infinitepush: introduce server option to route every push to bundlestore...
Pulkit Goyal
r37223:571f25da default
@@ -0,0 +1,344
Testing the case when there is no infinitepush extension present on the client
side and the server routes each push to the bundlestore. This case is very
similar to the CI use case.

Setup
-----

  $ . "$TESTDIR/library-infinitepush.sh"
  $ cat >> $HGRCPATH <<EOF
  > [alias]
  > glog = log -GT "{rev}:{node|short} {desc}\n{phase}"
  > EOF
  $ cp $HGRCPATH $TESTTMP/defaulthgrc
  $ hg init repo
  $ cd repo
  $ setupserver
  $ echo "pushtobundlestore = True" >> .hg/hgrc
  $ echo "[extensions]" >> .hg/hgrc
  $ echo "infinitepush=" >> .hg/hgrc
  $ echo initialcommit > initialcommit
  $ hg ci -Aqm "initialcommit"
  $ hg phase --public .

  $ cd ..
  $ hg clone repo client -q
  $ cd client
Pushing a new commit from the client to the server
--------------------------------------------------

  $ echo foobar > a
  $ hg ci -Aqm "added a"
  $ hg glog
  @  1:6cb0989601f1 added a
  |  draft
  o  0:67145f466344 initialcommit
     public

  $ hg push
  pushing to $TESTTMP/repo
  searching for changes
  storing changesets on the bundlestore
  pushing 1 commit:
      6cb0989601f1  added a

  $ scratchnodes
  6cb0989601f1fb5805238edfb16f3606713d9a0b a4c202c147a9c4bb91bbadb56321fc5f3950f7f2

Understanding how data is stored in the bundlestore on the server
-----------------------------------------------------------------

There are two components: the filebundlestore and the index.

  $ ls ../repo/.hg/scratchbranches
  filebundlestore
  index

filebundlestore stores the bundles:

  $ ls ../repo/.hg/scratchbranches/filebundlestore/a4/c2/
  a4c202c147a9c4bb91bbadb56321fc5f3950f7f2

index/nodemap stores a map from node id to the file in which the bundle is
stored in the filebundlestore:

  $ ls ../repo/.hg/scratchbranches/index/
  nodemap
  $ ls ../repo/.hg/scratchbranches/index/nodemap/
  6cb0989601f1fb5805238edfb16f3606713d9a0b
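The two-level directory fan-out seen above (a4/c2/a4c202...) suggests the store
names each bundle after a hash of its contents and shards the leading hex
digits into directories. A minimal sketch of that layout (assuming SHA-1
content addressing, which matches the 40-hex filenames; `bundle_path` is a
hypothetical helper, not the extension's API):

```python
import hashlib

def bundle_path(bundle_bytes):
    """Return a sharded, content-addressed path like 'a4/c2/a4c202...'."""
    digest = hashlib.sha1(bundle_bytes).hexdigest()
    # The first two directory levels reuse the leading hex digits of the
    # digest, keeping any single directory from accumulating too many files.
    return "%s/%s/%s" % (digest[0:2], digest[2:4], digest)
```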
  $ cd ../repo

Checking that the commit was not applied to the revlog on the server
--------------------------------------------------------------------

  $ hg glog
  @  0:67145f466344 initialcommit
     public

Applying the changeset from the bundlestore
-------------------------------------------

  $ hg unbundle .hg/scratchbranches/filebundlestore/a4/c2/a4c202c147a9c4bb91bbadb56321fc5f3950f7f2
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 6cb0989601f1
  (run 'hg update' to get a working copy)

  $ hg glog
  o  1:6cb0989601f1 added a
  |  public
  @  0:67145f466344 initialcommit
     public

Pushing more changesets from the local repo
-------------------------------------------

  $ cd ../client
  $ echo b > b
  $ hg ci -Aqm "added b"
  $ echo c > c
  $ hg ci -Aqm "added c"
  $ hg glog
  @  3:bf8a6e3011b3 added c
  |  draft
  o  2:eaba929e866c added b
  |  draft
  o  1:6cb0989601f1 added a
  |  public
  o  0:67145f466344 initialcommit
     public

  $ hg push
  pushing to $TESTTMP/repo
  searching for changes
  storing changesets on the bundlestore
  pushing 2 commits:
      eaba929e866c  added b
      bf8a6e3011b3  added c

Checking that the changesets are not applied on the server
----------------------------------------------------------

  $ hg glog -R ../repo
  o  1:6cb0989601f1 added a
  |  public
  @  0:67145f466344 initialcommit
     public

Both of the new changesets are stored in a single bundle file:

  $ scratchnodes
  6cb0989601f1fb5805238edfb16f3606713d9a0b a4c202c147a9c4bb91bbadb56321fc5f3950f7f2
  bf8a6e3011b345146bbbedbcb1ebd4837571492a ee41a41cefb7817cbfb235b4f6e9f27dbad6ca1f
  eaba929e866c59bc9a6aada5a9dd2f6990db83c0 ee41a41cefb7817cbfb235b4f6e9f27dbad6ca1f
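The scratchnodes output above shows a many-to-one relationship: every pushed
node gets its own index entry, and the entries for commits pushed together all
point at the same bundle file. A toy model of that index (hypothetical names
and truncated hashes for illustration only):

```python
def record_push(nodemap, nodes, bundle_handle):
    # Each node in the pushed bundle gets its own index entry,
    # all pointing at the one stored bundle file.
    for node in nodes:
        nodemap[node] = bundle_handle
    return nodemap

index = {}
record_push(index, ["eaba929e866c", "bf8a6e3011b3"], "ee41a41cefb7")
```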
Pushing more changesets to the server
-------------------------------------

  $ echo d > d
  $ hg ci -Aqm "added d"
  $ echo e > e
  $ hg ci -Aqm "added e"

XXX: we should have pushed only the parts which are not in the bundlestore

  $ hg push
  pushing to $TESTTMP/repo
  searching for changes
  storing changesets on the bundlestore
  pushing 4 commits:
      eaba929e866c  added b
      bf8a6e3011b3  added c
      1bb96358eda2  added d
      b4e4bce66051  added e

A sneak peek into the bundlestore on the server:

  $ scratchnodes
  1bb96358eda285b536c6d1c66846a7cdb2336cea 57e00c0d4f26e2a2a72b751b63d9abc4f3eb28e7
  6cb0989601f1fb5805238edfb16f3606713d9a0b a4c202c147a9c4bb91bbadb56321fc5f3950f7f2
  b4e4bce660512ad3e71189e14588a70ac8e31fef 57e00c0d4f26e2a2a72b751b63d9abc4f3eb28e7
  bf8a6e3011b345146bbbedbcb1ebd4837571492a 57e00c0d4f26e2a2a72b751b63d9abc4f3eb28e7
  eaba929e866c59bc9a6aada5a9dd2f6990db83c0 57e00c0d4f26e2a2a72b751b63d9abc4f3eb28e7

Checking whether `hg pull` pulls anything or `hg incoming` shows anything
-------------------------------------------------------------------------

  $ hg incoming
  comparing with $TESTTMP/repo
  searching for changes
  no changes found
  [1]

  $ hg pull
  pulling from $TESTTMP/repo
  searching for changes
  no changes found

Checking storage of phase information with the bundle on the bundlestore
------------------------------------------------------------------------

Creating a draft commit:

  $ cat >> $HGRCPATH <<EOF
  > [phases]
  > publish = False
  > EOF
  $ echo f > f
  $ hg ci -Aqm "added f"
  $ hg glog -r '.^::'
  @  6:9b42578d4447 added f
  |  draft
  o  5:b4e4bce66051 added e
  |  public
  ~

  $ hg push
  pushing to $TESTTMP/repo
  searching for changes
  storing changesets on the bundlestore
  pushing 5 commits:
      eaba929e866c  added b
      bf8a6e3011b3  added c
      1bb96358eda2  added d
      b4e4bce66051  added e
      9b42578d4447  added f

XXX: the phase of 9b42578d4447 should not be changed here

  $ hg glog -r .
  @  6:9b42578d4447 added f
  |  public
  ~

Applying the bundle on the server to check preservation of phase information:

  $ cd ../repo
  $ scratchnodes
  1bb96358eda285b536c6d1c66846a7cdb2336cea 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  6cb0989601f1fb5805238edfb16f3606713d9a0b a4c202c147a9c4bb91bbadb56321fc5f3950f7f2
  9b42578d44473575994109161430d65dd147d16d 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  b4e4bce660512ad3e71189e14588a70ac8e31fef 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  bf8a6e3011b345146bbbedbcb1ebd4837571492a 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  eaba929e866c59bc9a6aada5a9dd2f6990db83c0 0a6e70ecd5b98d22382f69b93909f557ac6a9927

  $ hg unbundle .hg/scratchbranches/filebundlestore/0a/6e/0a6e70ecd5b98d22382f69b93909f557ac6a9927
  adding changesets
  adding manifests
  adding file changes
  added 5 changesets with 5 changes to 5 files
  new changesets eaba929e866c:9b42578d4447
  (run 'hg update' to get a working copy)

  $ hg glog
  o  6:9b42578d4447 added f
  |  draft
  o  5:b4e4bce66051 added e
  |  public
  o  4:1bb96358eda2 added d
  |  public
  o  3:bf8a6e3011b3 added c
  |  public
  o  2:eaba929e866c added b
  |  public
  o  1:6cb0989601f1 added a
  |  public
  @  0:67145f466344 initialcommit
     public
Checking storage of obsmarkers in the bundlestore
-------------------------------------------------

Enabling obsmarkers and the rebase extension:

  $ cat >> $HGRCPATH << EOF
  > [experimental]
  > evolution = all
  > [extensions]
  > rebase =
  > EOF

  $ cd ../client

  $ hg phase -r . --draft --force
  $ hg rebase -r 6 -d 3
  rebasing 6:9b42578d4447 "added f" (tip)

  $ hg glog
  @  7:99949238d9ac added f
  |  draft
  | o  5:b4e4bce66051 added e
  | |  public
  | o  4:1bb96358eda2 added d
  |/   public
  o  3:bf8a6e3011b3 added c
  |  public
  o  2:eaba929e866c added b
  |  public
  o  1:6cb0989601f1 added a
  |  public
  o  0:67145f466344 initialcommit
     public

  $ hg push -f
  pushing to $TESTTMP/repo
  searching for changes
  storing changesets on the bundlestore
  pushing 1 commit:
      99949238d9ac  added f

XXX: the phase should not have changed here

  $ hg glog -r .
  @  7:99949238d9ac added f
  |  public
  ~

Unbundling on the server to see the obsmarkers being applied:

  $ cd ../repo

  $ scratchnodes
  1bb96358eda285b536c6d1c66846a7cdb2336cea 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  6cb0989601f1fb5805238edfb16f3606713d9a0b a4c202c147a9c4bb91bbadb56321fc5f3950f7f2
  99949238d9ac7f2424a33a46dface6f866afd059 090a24fe63f31d3b4bee714447f835c8c362ff57
  9b42578d44473575994109161430d65dd147d16d 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  b4e4bce660512ad3e71189e14588a70ac8e31fef 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  bf8a6e3011b345146bbbedbcb1ebd4837571492a 0a6e70ecd5b98d22382f69b93909f557ac6a9927
  eaba929e866c59bc9a6aada5a9dd2f6990db83c0 0a6e70ecd5b98d22382f69b93909f557ac6a9927

  $ hg glog
  o  6:9b42578d4447 added f
  |  draft
  o  5:b4e4bce66051 added e
  |  public
  o  4:1bb96358eda2 added d
  |  public
  o  3:bf8a6e3011b3 added c
  |  public
  o  2:eaba929e866c added b
  |  public
  o  1:6cb0989601f1 added a
  |  public
  @  0:67145f466344 initialcommit
     public

  $ hg unbundle .hg/scratchbranches/filebundlestore/09/0a/090a24fe63f31d3b4bee714447f835c8c362ff57
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 1 files (+1 heads)
  1 new obsolescence markers
  obsoleted 1 changesets
  new changesets 99949238d9ac
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg glog
  o  7:99949238d9ac added f
  |  draft
  | o  5:b4e4bce66051 added e
  | |  public
  | o  4:1bb96358eda2 added d
  |/   public
  o  3:bf8a6e3011b3 added c
  |  public
  o  2:eaba929e866c added b
  |  public
  o  1:6cb0989601f1 added a
  |  public
  @  0:67145f466344 initialcommit
     public
@@ -1,1126 +1,1186
# Infinite push
#
# Copyright 2016 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
""" store some pushes in a remote blob store on the server (EXPERIMENTAL)

    [infinitepush]
    # Server-side and client-side option. Pattern of the infinitepush bookmark
    branchpattern = PATTERN

    # Server or client
    server = False

    # Server-side option. Possible values: 'disk' or 'sql'. Fails if not set
    indextype = disk

    # Server-side option. Used only if indextype=sql.
    # Format: 'IP:PORT:DB_NAME:USER:PASSWORD'
    sqlhost = IP:PORT:DB_NAME:USER:PASSWORD

    # Server-side option. Used only if indextype=disk.
    # Filesystem path to the index store
    indexpath = PATH

    # Server-side option. Possible values: 'disk' or 'external'
    # Fails if not set
    storetype = disk

    # Server-side option.
    # Path to the binary that will save the bundle to the bundlestore
    # Formatted cmd line will be passed to it (see `put_args`)
    put_binary = put

    # Server-side option. Used only if storetype=external.
    # Format cmd-line string for put binary. Placeholder: (unknown)
    put_args = (unknown)

    # Server-side option.
    # Path to the binary that gets the bundle from the bundlestore.
    # Formatted cmd line will be passed to it (see `get_args`)
    get_binary = get

    # Server-side option. Used only if storetype=external.
    # Format cmd-line string for get binary. Placeholders: (unknown) {handle}
    get_args = (unknown) {handle}

    # Server-side option
    logfile = FILE

    # Server-side option
    loglevel = DEBUG

    # Server-side option. Used only if indextype=sql.
    # Sets mysql wait_timeout option.
    waittimeout = 300

    # Server-side option. Used only if indextype=sql.
    # Sets mysql innodb_lock_wait_timeout option.
    locktimeout = 120

    # Server-side option. Used only if indextype=sql.
    # Name of the repository
    reponame = ''

    # Client-side option. Used by --list-remote option. List of remote scratch
    # patterns to list if no patterns are specified.
    defaultremotepatterns = ['*']

    # Instructs infinitepush to forward all received bundle2 parts to the
    # bundle for storage. Defaults to False.
    storeallparts = True

    # Routes each incoming push to the bundlestore. Defaults to False.
    pushtobundlestore = True

    [remotenames]
    # Client-side option
    # This option should be set only if remotenames extension is enabled.
    # Whether remote bookmarks are tracked by remotenames extension.
    bookmarks = True
"""

from __future__ import absolute_import

import collections
import contextlib
import errno
import functools
import logging
import os
import random
import re
import socket
import subprocess
import tempfile
import time

from mercurial.node import (
    bin,
    hex,
)

from mercurial.i18n import _

from mercurial.utils import (
    procutil,
    stringutil,
)

from mercurial import (
    bundle2,
    changegroup,
    commands,
    discovery,
    encoding,
    error,
    exchange,
    extensions,
    hg,
    localrepo,
    peer,
    phases,
    pushkey,
    registrar,
    util,
    wireproto,
)

from . import (
    bundleparts,
    common,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('infinitepush', 'server',
    default=False,
)
configitem('infinitepush', 'storetype',
    default='',
)
configitem('infinitepush', 'indextype',
    default='',
)
configitem('infinitepush', 'indexpath',
    default='',
)
configitem('infinitepush', 'storeallparts',
    default=False,
)
configitem('infinitepush', 'reponame',
    default='',
)
configitem('scratchbranch', 'storepath',
    default='',
)
configitem('infinitepush', 'branchpattern',
    default='',
)
configitem('infinitepush', 'pushtobundlestore',
    default=False,
)
configitem('experimental', 'server-bundlestore-bookmark',
    default='',
)
configitem('experimental', 'infinitepush-scratchpush',
    default=False,
)

experimental = 'experimental'
configbookmark = 'server-bundlestore-bookmark'
configscratchpush = 'infinitepush-scratchpush'

scratchbranchparttype = bundleparts.scratchbranchparttype
revsetpredicate = registrar.revsetpredicate()
templatekeyword = registrar.templatekeyword()
_scratchbranchmatcher = lambda x: False
_maybehash = re.compile(r'^[a-f0-9]+$').search

def _buildexternalbundlestore(ui):
    put_args = ui.configlist('infinitepush', 'put_args', [])
    put_binary = ui.config('infinitepush', 'put_binary')
    if not put_binary:
        raise error.Abort('put binary is not specified')
    get_args = ui.configlist('infinitepush', 'get_args', [])
    get_binary = ui.config('infinitepush', 'get_binary')
    if not get_binary:
        raise error.Abort('get binary is not specified')
    from . import store
    return store.externalbundlestore(put_binary, put_args, get_binary, get_args)

def _buildsqlindex(ui):
    sqlhost = ui.config('infinitepush', 'sqlhost')
    if not sqlhost:
        raise error.Abort(_('please set infinitepush.sqlhost'))
    host, port, db, user, password = sqlhost.split(':')
    reponame = ui.config('infinitepush', 'reponame')
    if not reponame:
        raise error.Abort(_('please set infinitepush.reponame'))

    logfile = ui.config('infinitepush', 'logfile', '')
    waittimeout = ui.configint('infinitepush', 'waittimeout', 300)
    locktimeout = ui.configint('infinitepush', 'locktimeout', 120)
    from . import sqlindexapi
    return sqlindexapi.sqlindexapi(
        reponame, host, port, db, user, password,
        logfile, _getloglevel(ui), waittimeout=waittimeout,
        locktimeout=locktimeout)

def _getloglevel(ui):
    loglevel = ui.config('infinitepush', 'loglevel', 'DEBUG')
    numeric_loglevel = getattr(logging, loglevel.upper(), None)
    if not isinstance(numeric_loglevel, int):
        raise error.Abort(_('invalid log level %s') % loglevel)
    return numeric_loglevel

def _tryhoist(ui, remotebookmark):
    '''returns a bookmark with the hoisted part removed

    The remotenames extension has a 'hoist' config that allows using remote
    bookmarks without specifying the remote path. For example, 'hg update
    master' works as well as 'hg update remote/master'. We want to allow the
    same in infinitepush.
    '''

    if common.isremotebooksenabled(ui):
        hoist = ui.config('remotenames', 'hoist') + '/'
        if remotebookmark.startswith(hoist):
            return remotebookmark[len(hoist):]
    return remotebookmark

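The prefix stripping in `_tryhoist` above is simple enough to model directly.
A standalone sketch (with 'remote' standing in for whatever `remotenames.hoist`
is configured to, and no `ui` dependency):

```python
def tryhoist(hoist, remotebookmark):
    # If the bookmark carries the hoisted remote prefix, drop it,
    # so 'remote/master' is looked up as plain 'master'.
    prefix = hoist + '/'
    if remotebookmark.startswith(prefix):
        return remotebookmark[len(prefix):]
    return remotebookmark
```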
class bundlestore(object):
    def __init__(self, repo):
        self._repo = repo
        storetype = self._repo.ui.config('infinitepush', 'storetype', '')
        if storetype == 'disk':
            from . import store
            self.store = store.filebundlestore(self._repo.ui, self._repo)
        elif storetype == 'external':
            self.store = _buildexternalbundlestore(self._repo.ui)
        else:
            raise error.Abort(
                _('unknown infinitepush store type specified %s') % storetype)

        indextype = self._repo.ui.config('infinitepush', 'indextype', '')
        if indextype == 'disk':
            from . import fileindexapi
            self.index = fileindexapi.fileindexapi(self._repo)
        elif indextype == 'sql':
            self.index = _buildsqlindex(self._repo.ui)
        else:
            raise error.Abort(
                _('unknown infinitepush index type specified %s') % indextype)

def _isserver(ui):
    return ui.configbool('infinitepush', 'server')

def reposetup(ui, repo):
    if _isserver(ui) and repo.local():
        repo.bundlestore = bundlestore(repo)

def extsetup(ui):
    commonsetup(ui)
    if _isserver(ui):
        serverextsetup(ui)
    else:
        clientextsetup(ui)

def commonsetup(ui):
    wireproto.commands['listkeyspatterns'] = (
        wireprotolistkeyspatterns, 'namespace patterns')
    scratchbranchpat = ui.config('infinitepush', 'branchpattern')
    if scratchbranchpat:
        global _scratchbranchmatcher
        kind, pat, _scratchbranchmatcher = \
            stringutil.stringmatcher(scratchbranchpat)

def serverextsetup(ui):
    origpushkeyhandler = bundle2.parthandlermapping['pushkey']

    def newpushkeyhandler(*args, **kwargs):
        bundle2pushkey(origpushkeyhandler, *args, **kwargs)
    newpushkeyhandler.params = origpushkeyhandler.params
    bundle2.parthandlermapping['pushkey'] = newpushkeyhandler

    orighandlephasehandler = bundle2.parthandlermapping['phase-heads']
    newphaseheadshandler = lambda *args, **kwargs: \
        bundle2handlephases(orighandlephasehandler, *args, **kwargs)
    newphaseheadshandler.params = orighandlephasehandler.params
    bundle2.parthandlermapping['phase-heads'] = newphaseheadshandler

    extensions.wrapfunction(localrepo.localrepository, 'listkeys',
                            localrepolistkeys)
    wireproto.commands['lookup'] = (
        _lookupwrap(wireproto.commands['lookup'][0]), 'key')
    extensions.wrapfunction(exchange, 'getbundlechunks', getbundlechunks)

    extensions.wrapfunction(bundle2, 'processparts', processparts)

def clientextsetup(ui):
    entry = extensions.wrapcommand(commands.table, 'push', _push)

    entry[1].append(
        ('', 'bundle-store', None,
         _('force push to go to bundle store (EXPERIMENTAL)')))

    extensions.wrapcommand(commands.table, 'pull', _pull)

    extensions.wrapfunction(discovery, 'checkheads', _checkheads)

    wireproto.wirepeer.listkeyspatterns = listkeyspatterns

    partorder = exchange.b2partsgenorder
    index = partorder.index('changeset')
    partorder.insert(
        index, partorder.pop(partorder.index(scratchbranchparttype)))

def _checkheads(orig, pushop):
    if pushop.ui.configbool(experimental, configscratchpush, False):
        return
    return orig(pushop)

def wireprotolistkeyspatterns(repo, proto, namespace, patterns):
    patterns = wireproto.decodelist(patterns)
    d = repo.listkeys(encoding.tolocal(namespace), patterns).iteritems()
    return pushkey.encodekeys(d)

def localrepolistkeys(orig, self, namespace, patterns=None):
    if namespace == 'bookmarks' and patterns:
        index = self.bundlestore.index
        results = {}
        bookmarks = orig(self, namespace)
        for pattern in patterns:
            results.update(index.getbookmarks(pattern))
            if pattern.endswith('*'):
                pattern = 're:^' + pattern[:-1] + '.*'
            kind, pat, matcher = stringutil.stringmatcher(pattern)
            for bookmark, node in bookmarks.iteritems():
                if matcher(bookmark):
                    results[bookmark] = node
        return results
    else:
        return orig(self, namespace)

@peer.batchable
def listkeyspatterns(self, namespace, patterns):
    if not self.capable('pushkey'):
        yield {}, None
    f = peer.future()
    self.ui.debug('preparing listkeys for "%s" with pattern "%s"\n' %
                  (namespace, patterns))
    yield {
        'namespace': encoding.fromlocal(namespace),
        'patterns': wireproto.encodelist(patterns)
    }, f
    d = f.value
    self.ui.debug('received listkey for "%s": %i bytes\n'
                  % (namespace, len(d)))
    yield pushkey.decodekeys(d)

def _readbundlerevs(bundlerepo):
    return list(bundlerepo.revs('bundle()'))

def _includefilelogstobundle(bundlecaps, bundlerepo, bundlerevs, ui):
    '''Tells remotefilelog to include all changed files in the changegroup

    By default remotefilelog doesn't include file content in the changegroup.
    But we need to include it if we are fetching from the bundlestore.
    '''
    changedfiles = set()
    cl = bundlerepo.changelog
    for r in bundlerevs:
        # [3] means changed files
        changedfiles.update(cl.read(r)[3])
    if not changedfiles:
        return bundlecaps

    changedfiles = '\0'.join(changedfiles)
    newcaps = []
    appended = False
    for cap in (bundlecaps or []):
        if cap.startswith('excludepattern='):
            newcaps.append('\0'.join((cap, changedfiles)))
            appended = True
        else:
            newcaps.append(cap)
    if not appended:
        # No excludepattern cap was found; just append one
        newcaps.append('excludepattern=' + changedfiles)

    return newcaps

def _rebundle(bundlerepo, bundleroots, unknownhead):
    '''
    A bundle may include more revisions than the user requested, for example
    if the user asks for a revision but the bundle also contains its
    descendants. This function filters out all revisions that the user did
    not request.
    '''
    parts = []

    version = '02'
    outgoing = discovery.outgoing(bundlerepo, commonheads=bundleroots,
                                  missingheads=[unknownhead])
    cgstream = changegroup.makestream(bundlerepo, outgoing, version, 'pull')
    cgstream = util.chunkbuffer(cgstream).read()
    cgpart = bundle2.bundlepart('changegroup', data=cgstream)
    cgpart.addparam('version', version)
    parts.append(cgpart)

    return parts

def _getbundleroots(oldrepo, bundlerepo, bundlerevs):
    cl = bundlerepo.changelog
    bundleroots = []
    for rev in bundlerevs:
        node = cl.node(rev)
        parents = cl.parents(node)
        for parent in parents:
            # include all revs that exist in the main repo
            # to make sure that the bundle can be applied client-side
            if parent in oldrepo:
                bundleroots.append(parent)
    return bundleroots

def _needsrebundling(head, bundlerepo):
    bundleheads = list(bundlerepo.revs('heads(bundle())'))
    return not (len(bundleheads) == 1 and
                bundlerepo[bundleheads[0]].node() == head)

def _generateoutputparts(head, bundlerepo, bundleroots, bundlefile):
    '''generates a bundle that will be sent to the user

    returns a tuple with the raw bundle string and the bundle type
    '''
    parts = []
    if not _needsrebundling(head, bundlerepo):
        with util.posixfile(bundlefile, "rb") as f:
            unbundler = exchange.readbundle(bundlerepo.ui, f, bundlefile)
            if isinstance(unbundler, changegroup.cg1unpacker):
                part = bundle2.bundlepart('changegroup',
                                          data=unbundler._stream.read())
                part.addparam('version', '01')
                parts.append(part)
            elif isinstance(unbundler, bundle2.unbundle20):
                haschangegroup = False
                for part in unbundler.iterparts():
                    if part.type == 'changegroup':
                        haschangegroup = True
                    newpart = bundle2.bundlepart(part.type, data=part.read())
                    for key, value in part.params.iteritems():
                        newpart.addparam(key, value)
                    parts.append(newpart)

                if not haschangegroup:
                    raise error.Abort(
                        'unexpected bundle without changegroup part, ' +
                        'head: %s' % hex(head),
                        hint='report to administrator')
            else:
                raise error.Abort('unknown bundle type')
    else:
        parts = _rebundle(bundlerepo, bundleroots, head)

    return parts

def getbundlechunks(orig, repo, source, heads=None, bundlecaps=None, **kwargs):
    heads = heads or []
    # newheads are parents of roots of scratch bundles that were requested
    newphases = {}
    scratchbundles = []
    newheads = []
    scratchheads = []
    nodestobundle = {}
    allbundlestocleanup = []
    try:
        for head in heads:
            if head not in repo.changelog.nodemap:
                if head not in nodestobundle:
                    newbundlefile = common.downloadbundle(repo, head)
                    bundlepath = "bundle:%s+%s" % (repo.root, newbundlefile)
                    bundlerepo = hg.repository(repo.ui, bundlepath)

                    allbundlestocleanup.append((bundlerepo, newbundlefile))
                    bundlerevs = set(_readbundlerevs(bundlerepo))
                    bundlecaps = _includefilelogstobundle(
                        bundlecaps, bundlerepo, bundlerevs, repo.ui)
                    cl = bundlerepo.changelog
                    bundleroots = _getbundleroots(repo, bundlerepo, bundlerevs)
                    for rev in bundlerevs:
                        node = cl.node(rev)
                        newphases[hex(node)] = str(phases.draft)
                        nodestobundle[node] = (bundlerepo, bundleroots,
                                               newbundlefile)

                scratchbundles.append(
                    _generateoutputparts(head, *nodestobundle[head]))
                newheads.extend(bundleroots)
                scratchheads.append(head)
    finally:
        for bundlerepo, bundlefile in allbundlestocleanup:
            bundlerepo.close()
            try:
                os.unlink(bundlefile)
            except (IOError, OSError):
                # if we can't clean up the file then just ignore the error,
                # no need to fail
                pass

    pullfrombundlestore = bool(scratchbundles)
    wrappedchangegrouppart = False
    wrappedlistkeys = False
    oldchangegrouppart = exchange.getbundle2partsmapping['changegroup']
    try:
        def _changegrouppart(bundler, *args, **kwargs):
            # Order is important here. First add the non-scratch part
            # and only then add parts with scratch bundles because the
            # non-scratch part contains parents of roots of scratch bundles.
            result = oldchangegrouppart(bundler, *args, **kwargs)
            for bundle in scratchbundles:
                for part in bundle:
                    bundler.addpart(part)
            return result

        exchange.getbundle2partsmapping['changegroup'] = _changegrouppart
        wrappedchangegrouppart = True

        def _listkeys(orig, self, namespace):
            origvalues = orig(self, namespace)
            if namespace == 'phases' and pullfrombundlestore:
                if origvalues.get('publishing') == 'True':
                    # Make repo non-publishing to preserve draft phase
                    del origvalues['publishing']
                origvalues.update(newphases)
            return origvalues

        extensions.wrapfunction(localrepo.localrepository, 'listkeys',
                                _listkeys)
        wrappedlistkeys = True
        heads = list((set(newheads) | set(heads)) - set(scratchheads))
        result = orig(repo, source, heads=heads,
                      bundlecaps=bundlecaps, **kwargs)
    finally:
        if wrappedchangegrouppart:
            exchange.getbundle2partsmapping['changegroup'] = oldchangegrouppart
        if wrappedlistkeys:
            extensions.unwrapfunction(localrepo.localrepository, 'listkeys',
                                      _listkeys)
    return result

def _lookupwrap(orig):
    def _lookup(repo, proto, key):
        localkey = encoding.tolocal(key)

        if isinstance(localkey, str) and _scratchbranchmatcher(localkey):
            scratchnode = repo.bundlestore.index.getnode(localkey)
            if scratchnode:
                return "%s %s\n" % (1, scratchnode)
            else:
                return "%s %s\n" % (0, 'scratch branch %s not found' % localkey)
        else:
            try:
                r = hex(repo.lookup(localkey))
                return "%s %s\n" % (1, r)
            except Exception as inst:
                if repo.bundlestore.index.getbundle(localkey):
                    return "%s %s\n" % (1, localkey)
                else:
                    r = str(inst)
                    return "%s %s\n" % (0, r)
    return _lookup

def _pull(orig, ui, repo, source="default", **opts):
    # Copy-paste from the `pull` command
    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))

    scratchbookmarks = {}
    unfi = repo.unfiltered()
    unknownnodes = []
    for rev in opts.get('rev', []):
        if rev not in unfi:
            unknownnodes.append(rev)
    if opts.get('bookmark'):
        bookmarks = []
        revs = opts.get('rev') or []
        for bookmark in opts.get('bookmark'):
            if _scratchbranchmatcher(bookmark):
                # rev is not known yet
                # it will be fetched with listkeyspatterns next
                scratchbookmarks[bookmark] = 'REVTOFETCH'
            else:
                bookmarks.append(bookmark)

        if scratchbookmarks:
            other = hg.peer(repo, opts, source)
            fetchedbookmarks = other.listkeyspatterns(
                'bookmarks', patterns=scratchbookmarks)
            for bookmark in scratchbookmarks:
                if bookmark not in fetchedbookmarks:
                    raise error.Abort('remote bookmark %s not found!' %
                                      bookmark)
                scratchbookmarks[bookmark] = fetchedbookmarks[bookmark]
                revs.append(fetchedbookmarks[bookmark])
        opts['bookmark'] = bookmarks
        opts['rev'] = revs

    if scratchbookmarks or unknownnodes:
        # Set anyincoming to True
        extensions.wrapfunction(discovery, 'findcommonincoming',
                                _findcommonincoming)
    try:
        # Remote scratch bookmarks will be deleted because remotenames doesn't
        # know about them. Let's save them before the pull and restore after
        remotescratchbookmarks = _readscratchremotebookmarks(ui, repo, source)
        result = orig(ui, repo, source, **opts)
        # TODO(stash): race condition is possible
        # if scratch bookmarks were updated right after orig.
        # But that's unlikely and shouldn't be harmful.
        if common.isremotebooksenabled(ui):
            remotescratchbookmarks.update(scratchbookmarks)
            _saveremotebookmarks(repo, remotescratchbookmarks, source)
        else:
            _savelocalbookmarks(repo, scratchbookmarks)
        return result
    finally:
        if scratchbookmarks:
            extensions.unwrapfunction(discovery, 'findcommonincoming')

def _readscratchremotebookmarks(ui, repo, other):
    if common.isremotebooksenabled(ui):
        remotenamesext = extensions.find('remotenames')
        remotepath = remotenamesext.activepath(repo.ui, other)
        result = {}
        # Let's refresh remotenames to make sure they are up to date
        # Seems that `repo.names['remotebookmarks']` may return stale bookmarks
        # and it results in deleting scratch bookmarks. Our best guess at how
        # to fix it is to use `clearnames()`
        repo._remotenames.clearnames()
        for remotebookmark in repo.names['remotebookmarks'].listnames(repo):
            path, bookname = remotenamesext.splitremotename(remotebookmark)
            if path == remotepath and _scratchbranchmatcher(bookname):
                nodes = repo.names['remotebookmarks'].nodes(repo,
                                                            remotebookmark)
                if nodes:
                    result[bookname] = hex(nodes[0])
        return result
    else:
        return {}

def _saveremotebookmarks(repo, newbookmarks, remote):
    remotenamesext = extensions.find('remotenames')
    remotepath = remotenamesext.activepath(repo.ui, remote)
    branches = collections.defaultdict(list)
    bookmarks = {}
    remotenames = remotenamesext.readremotenames(repo)
    for hexnode, nametype, remote, rname in remotenames:
        if remote != remotepath:
            continue
        if nametype == 'bookmarks':
            if rname in newbookmarks:
                # It's possible that a normal bookmark matches the scratch
                # branch pattern. In this case just use the current
                # bookmark node
                del newbookmarks[rname]
            bookmarks[rname] = hexnode
        elif nametype == 'branches':
            # saveremotenames expects 20 byte binary nodes for branches
            branches[rname].append(bin(hexnode))

    for bookmark, hexnode in newbookmarks.iteritems():
        bookmarks[bookmark] = hexnode
    remotenamesext.saveremotenames(repo, remotepath, branches, bookmarks)

def _savelocalbookmarks(repo, bookmarks):
    if not bookmarks:
        return
    with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
        changes = []
        for scratchbook, node in bookmarks.iteritems():
            changectx = repo[node]
            changes.append((scratchbook, changectx.node()))
        repo._bookmarks.applychanges(repo, tr, changes)

def _findcommonincoming(orig, *args, **kwargs):
    common, inc, remoteheads = orig(*args, **kwargs)
    return common, True, remoteheads

def _push(orig, ui, repo, dest=None, *args, **opts):

    bookmark = opts.get('bookmark')
    # we only support pushing one infinitepush bookmark at once
    if len(bookmark) == 1:
        bookmark = bookmark[0]
    else:
        bookmark = ''

    oldphasemove = None
    overrides = {(experimental, configbookmark): bookmark}

    with ui.configoverride(overrides, 'infinitepush'):
        scratchpush = opts.get('bundle_store')
        if _scratchbranchmatcher(bookmark):
            scratchpush = True
            # a bundle2 can be sent back after push (for example, a bundle2
            # containing a `pushkey` part to update bookmarks)
            ui.setconfig(experimental, 'bundle2.pushback', True)

        if scratchpush:
            # this is an infinitepush; we don't want the bookmark to be
            # applied, rather it should be stored in the bundlestore
            opts['bookmark'] = []
            ui.setconfig(experimental, configscratchpush, True)
            oldphasemove = extensions.wrapfunction(exchange,
                                                   '_localphasemove',
                                                   _phasemove)
        # Copy-paste from the `push` command
        path = ui.paths.getpath(dest, default=('default-push', 'default'))
        if not path:
            raise error.Abort(_('default repository not configured!'),
                              hint=_("see 'hg help config.paths'"))
        destpath = path.pushloc or path.loc
        # Remote scratch bookmarks will be deleted because remotenames doesn't
        # know about them. Let's save them before the push and restore after
        remotescratchbookmarks = _readscratchremotebookmarks(ui, repo, destpath)
        result = orig(ui, repo, dest, *args, **opts)
        if common.isremotebooksenabled(ui):
            if bookmark and scratchpush:
                other = hg.peer(repo, opts, destpath)
                fetchedbookmarks = other.listkeyspatterns('bookmarks',
                                                          patterns=[bookmark])
                remotescratchbookmarks.update(fetchedbookmarks)
            _saveremotebookmarks(repo, remotescratchbookmarks, destpath)
        if oldphasemove:
            exchange._localphasemove = oldphasemove
        return result

def _deleteinfinitepushbookmarks(ui, repo, path, names):
    """Prune remote names by removing the bookmarks we don't want anymore,
    then writing the result back to disk
    """
    remotenamesext = extensions.find('remotenames')

    # remotename format is:
    # (node, nametype ("branches" or "bookmarks"), remote, name)
    nametype_idx = 1
    remote_idx = 2
    name_idx = 3
    remotenames = [remotename for remotename in \
                   remotenamesext.readremotenames(repo) \
                   if remotename[remote_idx] == path]
    remote_bm_names = [remotename[name_idx] for remotename in \
                       remotenames if remotename[nametype_idx] == "bookmarks"]

    for name in names:
        if name not in remote_bm_names:
            raise error.Abort(_("infinitepush bookmark '{}' does not exist "
                                "in path '{}'").format(name, path))

    bookmarks = {}
    branches = collections.defaultdict(list)
    for node, nametype, remote, name in remotenames:
        if nametype == "bookmarks" and name not in names:
            bookmarks[name] = node
        elif nametype == "branches":
            # saveremotenames wants binary nodes for branches
            branches[name].append(bin(node))

    remotenamesext.saveremotenames(repo, path, branches, bookmarks)

def _phasemove(orig, pushop, nodes, phase=phases.public):
    """prevent commits from being marked public

    Since these are going to a scratch branch, they aren't really being
    published."""

    if phase != phases.public:
        orig(pushop, nodes, phase)

@exchange.b2partsgenerator(scratchbranchparttype)
def partgen(pushop, bundler):
    bookmark = pushop.ui.config(experimental, configbookmark)
    scratchpush = pushop.ui.configbool(experimental, configscratchpush)
    if 'changesets' in pushop.stepsdone or not scratchpush:
        return

    if scratchbranchparttype not in bundle2.bundle2caps(pushop.remote):
        return

    pushop.stepsdone.add('changesets')
    if not pushop.outgoing.missing:
        pushop.ui.status(_('no changes found\n'))
        pushop.cgresult = 0
        return

    # This parameter tells the server that the following bundle is an
    # infinitepush. This lets it switch the part processing to our
    # infinitepush code path.
    bundler.addparam("infinitepush", "True")

    scratchparts = bundleparts.getscratchbranchparts(pushop.repo,
                                                     pushop.remote,
                                                     pushop.outgoing,
                                                     pushop.ui,
                                                     bookmark)

    for scratchpart in scratchparts:
        bundler.addpart(scratchpart)

    def handlereply(op):
        # server either succeeds or aborts; no code to read
        pushop.cgresult = 1

    return handlereply

bundle2.capabilities[bundleparts.scratchbranchparttype] = ()

def _getrevs(bundle, oldnode, force, bookmark):
    'extracts and validates the revs to be imported'
    revs = [bundle[r] for r in bundle.revs('sort(bundle())')]

    # new bookmark
    if oldnode is None:
        return revs

    # Fast forward update
    if oldnode in bundle and list(bundle.set('bundle() & %s::', oldnode)):
        return revs

    return revs

@contextlib.contextmanager
def logservicecall(logger, service, **kwargs):
    start = time.time()
    logger(service, eventtype='start', **kwargs)
    try:
        yield
        logger(service, eventtype='success',
               elapsedms=(time.time() - start) * 1000, **kwargs)
    except Exception as e:
        logger(service, eventtype='failure',
               elapsedms=(time.time() - start) * 1000, errormsg=str(e),
               **kwargs)
        raise

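The timing context manager above is self-contained; a minimal standalone sketch of the same pattern, with a list-backed logger standing in for `ui.log` (the name `timedcall` and the sink are illustrative, not part of the extension):

```python
import contextlib
import time

@contextlib.contextmanager
def timedcall(logger, service, **kwargs):
    # emit 'start', then 'success' or 'failure' with elapsed milliseconds
    start = time.time()
    logger(service, eventtype='start', **kwargs)
    try:
        yield
        logger(service, eventtype='success',
               elapsedms=(time.time() - start) * 1000, **kwargs)
    except Exception as e:
        logger(service, eventtype='failure',
               elapsedms=(time.time() - start) * 1000, errormsg=str(e),
               **kwargs)
        raise

events = []
def logger(service, **kw):
    # collect (service, eventtype) pairs instead of writing a log
    events.append((service, kw['eventtype']))

with timedcall(logger, 'bundlestore', bundlesize=3):
    pass
```

Because the exception is re-raised after the 'failure' event, callers still see the original error while the log keeps the elapsed time.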
def _getorcreateinfinitepushlogger(op):
    logger = op.records['infinitepushlogger']
    if not logger:
        ui = op.repo.ui
        try:
            username = procutil.getuser()
        except Exception:
            username = 'unknown'
        # Generate random request id to be able to find all logged entries
        # for the same request. Since requestid is pseudo-generated it may
        # not be unique, but we assume that (hostname, username, requestid)
        # is unique.
        random.seed()
        requestid = random.randint(0, 2000000000)
        hostname = socket.gethostname()
        logger = functools.partial(ui.log, 'infinitepush', user=username,
                                   requestid=requestid, hostname=hostname,
                                   reponame=ui.config('infinitepush',
                                                      'reponame'))
        op.records.add('infinitepushlogger', logger)
    else:
        logger = logger[0]
    return logger

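The `functools.partial` above pre-binds the per-request metadata once, so every later call only supplies the event-specific fields. A tiny sketch of the same idea with an in-memory sink (the names here are illustrative):

```python
import functools

entries = []

def log(event, **kw):
    # stand-in for ui.log: collect (event, keywords) pairs
    entries.append((event, kw))

# bind the per-request metadata once...
logger = functools.partial(log, 'infinitepush', user='unknown',
                           requestid=42, hostname='localhost')

# ...then each call only adds the event-specific fields
logger(eventtype='start')
logger(eventtype='success', elapsedms=12.5)
```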
def storetobundlestore(orig, repo, op, unbundler):
    """stores the incoming bundle coming from push command to the bundlestore
    instead of applying on the revlogs"""

    repo.ui.status(_("storing changesets on the bundlestore\n"))
    bundler = bundle2.bundle20(repo.ui)

    # processing each part and storing it in bundler
    with bundle2.partiterator(repo, op, unbundler) as parts:
        for part in parts:
            bundlepart = None
            if part.type == 'replycaps':
                # This configures the current operation to allow reply parts.
                bundle2._processpart(op, part)
            else:
                bundlepart = bundle2.bundlepart(part.type, data=part.read())
                for key, value in part.params.iteritems():
                    bundlepart.addparam(key, value)

                # Certain parts require a response
                if part.type in ('pushkey', 'changegroup'):
                    if op.reply is not None:
                        rpart = op.reply.newpart('reply:%s' % part.type)
                        rpart.addparam('in-reply-to', str(part.id),
                                       mandatory=False)
                        rpart.addparam('return', '1', mandatory=False)

            op.records.add(part.type, {
                'return': 1,
            })
            if bundlepart:
                bundler.addpart(bundlepart)

    # storing the bundle in the bundlestore
    buf = util.chunkbuffer(bundler.getchunks())
    fd, bundlefile = tempfile.mkstemp()
    try:
        try:
            fp = os.fdopen(fd, 'wb')
            fp.write(buf.read())
        finally:
            fp.close()
        storebundle(op, {}, bundlefile)
    finally:
        try:
            os.unlink(bundlefile)
        except Exception:
            # we would rather see the original exception
            pass

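The write-then-clean-up idiom around `storebundle()` above (also repeated in `processparts` and `bundle2scratchbranch`) can be factored out. A hypothetical standalone helper — `writetotempbundle` is not a name from the extension:

```python
import os
import tempfile

def writetotempbundle(data, process):
    # write bytes to a temp file, pass its path to `process`, and always
    # unlink the file afterwards, preferring the original exception
    fd, bundlefile = tempfile.mkstemp()
    try:
        try:
            fp = os.fdopen(fd, 'wb')
            fp.write(data)
        finally:
            fp.close()
        return process(bundlefile)
    finally:
        try:
            os.unlink(bundlefile)
        except Exception:
            # we would rather see the original exception
            pass
```

The nested try/finally closes the file descriptor before `process` runs, and the outer finally guarantees the temp file never outlives the call.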
def processparts(orig, repo, op, unbundler):

    # make sure we don't wrap processparts in case of `hg unbundle`
    tr = repo.currenttransaction()
    if tr:
        if tr.names[0].startswith('unbundle'):
            return orig(repo, op, unbundler)

    # this server routes each push to bundle store
    if repo.ui.configbool('infinitepush', 'pushtobundlestore'):
        return storetobundlestore(orig, repo, op, unbundler)

    if unbundler.params.get('infinitepush') != 'True':
        return orig(repo, op, unbundler)

    handleallparts = repo.ui.configbool('infinitepush', 'storeallparts')

    bundler = bundle2.bundle20(repo.ui)
    cgparams = None
    with bundle2.partiterator(repo, op, unbundler) as parts:
        for part in parts:
            bundlepart = None
            if part.type == 'replycaps':
                # This configures the current operation to allow reply parts.
                bundle2._processpart(op, part)
            elif part.type == bundleparts.scratchbranchparttype:
                # Scratch branch parts need to be converted to normal
                # changegroup parts, and the extra parameters stored for later
                # when we upload to the store. Eventually those parameters will
                # be put on the actual bundle instead of this part, then we can
                # send a vanilla changegroup instead of the scratchbranch part.
                cgversion = part.params.get('cgversion', '01')
                bundlepart = bundle2.bundlepart('changegroup',
                                                data=part.read())
                bundlepart.addparam('version', cgversion)
                cgparams = part.params

                # If we're not dumping all parts into the new bundle, we need
                # to alert the future pushkey and phase-heads handler to skip
                # the part.
                if not handleallparts:
                    op.records.add(scratchbranchparttype + '_skippushkey',
                                   True)
                    op.records.add(scratchbranchparttype + '_skipphaseheads',
                                   True)
            else:
                if handleallparts:
                    # Ideally we would not process any parts, and instead just
                    # forward them to the bundle for storage, but since this
                    # differs from previous behavior, we need to put it behind
                    # a config flag for incremental rollout.
                    bundlepart = bundle2.bundlepart(part.type,
                                                    data=part.read())
                    for key, value in part.params.iteritems():
                        bundlepart.addparam(key, value)

                    # Certain parts require a response
                    if part.type == 'pushkey':
                        if op.reply is not None:
                            rpart = op.reply.newpart('reply:pushkey')
                            rpart.addparam('in-reply-to', str(part.id),
                                           mandatory=False)
                            rpart.addparam('return', '1', mandatory=False)
                else:
                    bundle2._processpart(op, part)

            if handleallparts:
                op.records.add(part.type, {
                    'return': 1,
                })
            if bundlepart:
                bundler.addpart(bundlepart)

    # If commits were sent, store them
    if cgparams:
        buf = util.chunkbuffer(bundler.getchunks())
        fd, bundlefile = tempfile.mkstemp()
        try:
            try:
                fp = os.fdopen(fd, 'wb')
                fp.write(buf.read())
            finally:
                fp.close()
            storebundle(op, cgparams, bundlefile)
        finally:
            try:
                os.unlink(bundlefile)
            except Exception:
                # we would rather see the original exception
                pass

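After the `hg unbundle` guard, the routing at the top of `processparts()` boils down to a three-way decision: the server-side `infinitepush.pushtobundlestore` config wins outright, otherwise only bundles flagged with the `infinitepush` param take the scratch path. A hypothetical distillation (`choosehandler` is illustrative, not extension API):

```python
def choosehandler(pushtobundlestore, infinitepushparam):
    # server config routes every push to the bundlestore
    if pushtobundlestore:
        return 'bundlestore'
    # without the client-set bundle param, fall through to core Mercurial
    if infinitepushparam != 'True':
        return 'default'
    # otherwise take the scratch-branch code path
    return 'infinitepush'
```

This is what lets a plain client with no infinitepush extension (the CI use case from the test file) still have its pushes captured: the first branch fires purely on server configuration.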
def storebundle(op, params, bundlefile):
    log = _getorcreateinfinitepushlogger(op)
    parthandlerstart = time.time()
    log(scratchbranchparttype, eventtype='start')
    index = op.repo.bundlestore.index
    store = op.repo.bundlestore.store
    op.records.add(scratchbranchparttype + '_skippushkey', True)

    bundle = None
    try:  # guards bundle
        bundlepath = "bundle:%s+%s" % (op.repo.root, bundlefile)
        bundle = hg.repository(op.repo.ui, bundlepath)

        bookmark = params.get('bookmark')
        bookprevnode = params.get('bookprevnode', '')
        force = params.get('force')

        if bookmark:
            oldnode = index.getnode(bookmark)
        else:
            oldnode = None
        bundleheads = bundle.revs('heads(bundle())')
        if bookmark and len(bundleheads) > 1:
            raise error.Abort(
                _('cannot push more than one head to a scratch branch'))

        revs = _getrevs(bundle, oldnode, force, bookmark)

        # Notify the user of what is being pushed
        plural = 's' if len(revs) > 1 else ''
        op.repo.ui.warn(_("pushing %s commit%s:\n") % (len(revs), plural))
        maxoutput = 10
        for i in range(0, min(len(revs), maxoutput)):
            firstline = bundle[revs[i]].description().split('\n')[0][:50]
            op.repo.ui.warn(("    %s  %s\n") % (revs[i], firstline))

        if len(revs) > maxoutput + 1:
            op.repo.ui.warn(("    ...\n"))
            firstline = bundle[revs[-1]].description().split('\n')[0][:50]
            op.repo.ui.warn(("    %s  %s\n") % (revs[-1], firstline))

        nodesctx = [bundle[rev] for rev in revs]
        inindex = lambda rev: bool(index.getbundle(bundle[rev].hex()))
        if bundleheads:
            newheadscount = sum(not inindex(rev) for rev in bundleheads)
        else:
            newheadscount = 0
        # If there's a bookmark specified, there should be only one head,
        # so we choose the last node, which will be that head.
        # If a bug or malicious client allows there to be a bookmark
        # with multiple heads, we will place the bookmark on the last head.
        bookmarknode = nodesctx[-1].hex() if nodesctx else None
        key = None
        if newheadscount:
            with open(bundlefile, 'r') as f:
                bundledata = f.read()
                with logservicecall(log, 'bundlestore',
                                    bundlesize=len(bundledata)):
                    bundlesizelimit = 100 * 1024 * 1024  # 100 MB
                    if len(bundledata) > bundlesizelimit:
                        error_msg = ('bundle is too big: %d bytes. ' +
                                     'max allowed size is 100 MB')
                        raise error.Abort(error_msg % (len(bundledata),))
                    key = store.write(bundledata)

        with logservicecall(log, 'index', newheadscount=newheadscount), index:
            if key:
                index.addbundle(key, nodesctx)
            if bookmark:
                index.addbookmark(bookmark, bookmarknode)
                _maybeaddpushbackpart(op, bookmark, bookmarknode,
                                      bookprevnode, params)
        log(scratchbranchparttype, eventtype='success',
            elapsedms=(time.time() - parthandlerstart) * 1000)

    except Exception as e:
        log(scratchbranchparttype, eventtype='failure',
            elapsedms=(time.time() - parthandlerstart) * 1000,
            errormsg=str(e))
        raise
    finally:
        if bundle:
            bundle.close()

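The 100 MB cap enforced inside `storebundle()` is easy to exercise in isolation; a hedged sketch (`checkbundlesize` is a hypothetical helper, not part of the extension):

```python
def checkbundlesize(bundledata, limit=100 * 1024 * 1024):
    # reject bundles over the limit, mirroring the abort in storebundle();
    # returns the size so callers can log it
    if len(bundledata) > limit:
        raise ValueError('bundle is too big: %d bytes. '
                         'max allowed size is 100 MB' % len(bundledata))
    return len(bundledata)
```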
@bundle2.parthandler(scratchbranchparttype,
                     ('bookmark', 'bookprevnode', 'force',
                      'pushbackbookmarks', 'cgversion'))
def bundle2scratchbranch(op, part):
    '''unbundle a bundle2 part containing a changegroup to store'''

    bundler = bundle2.bundle20(op.repo.ui)
    cgversion = part.params.get('cgversion', '01')
    cgpart = bundle2.bundlepart('changegroup', data=part.read())
    cgpart.addparam('version', cgversion)
    bundler.addpart(cgpart)
    buf = util.chunkbuffer(bundler.getchunks())

    fd, bundlefile = tempfile.mkstemp()
    try:
        try:
            fp = os.fdopen(fd, 'wb')
            fp.write(buf.read())
        finally:
            fp.close()
        storebundle(op, part.params, bundlefile)
    finally:
        try:
            os.unlink(bundlefile)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise

    return 1

def _maybeaddpushbackpart(op, bookmark, newnode, oldnode, params):
    if params.get('pushbackbookmarks'):
        if op.reply and 'pushback' in op.reply.capabilities:
            params = {
                'namespace': 'bookmarks',
                'key': bookmark,
                'new': newnode,
                'old': oldnode,
            }
            op.reply.newpart('pushkey', mandatoryparams=params.iteritems())

def bundle2pushkey(orig, op, part):
    '''Wrapper of bundle2.handlepushkey()

    The only goal is to skip calling the original function if flag is set.
    It's set if infinitepush push is happening.
    '''
    if op.records[scratchbranchparttype + '_skippushkey']:
        if op.reply is not None:
            rpart = op.reply.newpart('reply:pushkey')
            rpart.addparam('in-reply-to', str(part.id), mandatory=False)
            rpart.addparam('return', '1', mandatory=False)
        return 1

    return orig(op, part)

def bundle2handlephases(orig, op, part):
    '''Wrapper of bundle2.handlephases()

    The only goal is to skip calling the original function if flag is set.
    It's set if infinitepush push is happening.
    '''

    if op.records[scratchbranchparttype + '_skipphaseheads']:
        return

    return orig(op, part)

def _asyncsavemetadata(root, nodes):
    '''starts a separate process that fills metadata for the nodes

    This function creates a separate process and doesn't wait for its
    completion. This was done to avoid slowing down pushes.
    '''

    maxnodes = 50
    if len(nodes) > maxnodes:
        return
    nodesargs = []
    for node in nodes:
        nodesargs.append('--node')
        nodesargs.append(node)
    with open(os.devnull, 'w+b') as devnull:
        cmdline = [util.hgexecutable(), 'debugfillinfinitepushmetadata',
                   '-R', root] + nodesargs
        # Process will run in background. We don't care about the return code
        subprocess.Popen(cmdline, close_fds=True, shell=False,
                         stdin=devnull, stdout=devnull, stderr=devnull)