clonebundles: filter on SNI requirement...
Gregory Szorc
r26645:2faa7671 default
hgext/clonebundles.py
@@ -1,85 +1,98 @@
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""server side extension to advertise pre-generated bundles to seed clones.

The extension essentially serves the content of a .hg/clonebundles.manifest
file to clients that request it.

The clonebundles.manifest file contains a list of URLs and attributes. URLs
hold pre-generated bundles that a client fetches and applies. After applying
the pre-generated bundle, the client will connect back to the original server
and pull data not in the pre-generated bundle.

Manifest File Format:

The manifest file contains a newline (\n) delimited list of entries.

Each line in this file defines an available bundle. Lines have the format:

    <URL> [<key>=<value>]

That is, a URL followed by extra metadata describing it. Metadata keys and
values should be URL encoded.

This metadata is optional. It is up to server operators to populate this
metadata.

Keys in UPPERCASE are reserved for use by Mercurial. All non-uppercase keys
can be used by site installations.

The server operator is responsible for generating the bundle manifest file.

Metadata Attributes:

BUNDLESPEC
    A "bundle specification" string that describes the type of the bundle.

    These are string values that are accepted by the "--type" argument of
    `hg bundle`.

    The values are parsed in strict mode, which means they must be of the
    "<compression>-<type>" form. See
    mercurial.exchange.parsebundlespec() for more details.

    Clients will automatically filter out specifications that are unknown or
    unsupported so they won't attempt to download something that likely won't
    apply.

    The actual value doesn't impact client behavior beyond filtering:
    clients will still sniff the bundle type from the header of downloaded
    files.
+
+REQUIRESNI
+    Whether Server Name Indication (SNI) is required to connect to the URL.
+    SNI allows servers to use multiple certificates on the same IP. It is
+    somewhat common in CDNs and other hosting providers. Older Python
+    versions do not support SNI. Defining this attribute enables clients
+    with older Python versions to filter this entry.
+
+    If this is defined, it is important to advertise a non-SNI fallback
+    URL or clients running old Python releases may not be able to clone
+    with the clonebundles facility.
+
+    Value should be "true".
"""
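As an editorial illustration of the manifest format described above (the URLs and paths are hypothetical and not part of the original change), a clonebundles.manifest advertising an SNI-requiring CDN entry plus the recommended non-SNI fallback might look like:

```
https://cdn.example.com/bundles/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true
https://fallback.example.com/bundles/full.hg BUNDLESPEC=gzip-v2
https://old.example.com/bundles/full-v1.hg BUNDLESPEC=bzip2-v1
```

Clients on SNI-capable Pythons can prefer the CDN entry; older clients skip it and fall back to the second line.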

from mercurial import (
    extensions,
    wireproto,
)

testedwith = 'internal'

def capabilities(orig, repo, proto):
    caps = orig(repo, proto)

    # Only advertise if a manifest exists. This does add some I/O to requests.
    # But this should be cheaper than a wasted network round trip due to a
    # missing file.
    if repo.opener.exists('clonebundles.manifest'):
        caps.append('clonebundles')

    return caps

@wireproto.wireprotocommand('clonebundles', '')
def bundles(repo, proto):
    """Server command for returning info for available bundles to seed clones.

    Clients will parse this response and determine what bundle to fetch.

    Other extensions may wrap this command to filter or dynamically emit
    data depending on the request. e.g. you could advertise URLs for
    the closest data center given the client's IP address.
    """
    return repo.opener.tryread('clonebundles.manifest')

def extsetup(ui):
    extensions.wrapfunction(wireproto, '_capabilities', capabilities)
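The client-side behaviour this change enables (skipping entries whose BUNDLESPEC is unsupported, or that require SNI when the running Python cannot provide it) can be sketched in standalone form. This is a simplified, illustrative reimplementation, not Mercurial's actual client code; the names `parse_manifest` and `filter_entries` are my own, and it is written in modern Python 3 rather than the Python 2 of the surrounding source:

```python
from urllib.parse import unquote

# Specs the hypothetical client understands, mirroring the tables in exchange.py.
SUPPORTED_COMPRESSIONS = {'none', 'bzip2', 'gzip'}
SUPPORTED_VERSIONS = {'v1', 'v2'}

def parse_manifest(data):
    """Parse clonebundles.manifest text into (url, attrs) pairs.

    Each non-empty line is a URL followed by URL-encoded key=value attributes.
    """
    entries = []
    for line in data.splitlines():
        if not line.strip():
            continue
        fields = line.split()
        attrs = {}
        for field in fields[1:]:
            key, _, value = field.partition('=')
            attrs[unquote(key)] = unquote(value)
        entries.append((fields[0], attrs))
    return entries

def filter_entries(entries, snisupported):
    """Drop entries this client cannot use, per the docstring's rules."""
    usable = []
    for url, attrs in entries:
        spec = attrs.get('BUNDLESPEC')
        if spec is not None:
            # Strict form is <compression>-<type>; unknown specs are skipped
            # rather than downloaded and then rejected.
            if '-' not in spec:
                continue
            compression, version = spec.split('-', 1)
            if (compression not in SUPPORTED_COMPRESSIONS
                    or version not in SUPPORTED_VERSIONS):
                continue
        # The new REQUIRESNI attribute: skip SNI-only URLs on old Pythons.
        if attrs.get('REQUIRESNI') == 'true' and not snisupported:
            continue
        usable.append(url)
    return usable
```

Note how a manifest that pairs every `REQUIRESNI=true` entry with a fallback still yields at least one usable URL for SNI-incapable clients.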
mercurial/exchange.py
@@ -1,1702 +1,1708 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib, urllib2
import util, scmutil, changegroup, base85, error
import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey
import lock as lockmod
import streamclone
+import sslutil
import tags
import url as urlmod

# Maps bundle compression human names to internal representation.
_bundlespeccompressions = {'none': None,
                           'bzip2': 'BZ',
                           'gzip': 'GZ',
                          }

# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {'v1': '01',
                         'v2': '02',
                         'bundle2': '02', #legacy
                        }

def parsebundlespec(repo, spec, strict=True):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

        <compression>-<type>

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    Returns a 2-tuple of (compression, version). Compression will be ``None``
    if not in strict mode and a compression isn't defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """
    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
                _('invalid bundle specification; '
                  'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in _bundlespeccompressions:
            raise error.UnsupportedBundleSpecification(
                    _('%s compression is not supported') % compression)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        if spec in _bundlespeccompressions:
            compression = spec
            version = 'v1'
            if 'generaldelta' in repo.requirements:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle specification') % spec)

    compression = _bundlespeccompressions[compression]
    version = _bundlespeccgversions[version]
    return compression, version

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if markers:
        remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
        version = obsolete.commonversion(remoteversions)
        if version is None:
            raise ValueError('bundler does not support common obsmarker format')
        stream = obsolete.encodemarkers(markers, True, version=version)
        return bundler.newpart('obsmarkers', data=stream)
    return None

def _canusebundle2(op):
    """return true if a pull/push can use bundle2

    Feel free to nuke this function when we drop the experimental option"""
    return (op.repo.ui.configbool('experimental', 'bundle2-exp', True)
            and op.remote.capable('bundle2'))


class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new one should be created at the beginning of each push and discarded
    afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exception from mandatory pushkey part failure
        self.pkfailcb = {}

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changeset filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))"
        #              )
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

# mapping of messages used when pushing bookmarks
bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _('updating bookmark %s failed!\n')),
              'export': (_("exporting bookmark %s\n"),
                         _('exporting bookmark %s failed!\n')),
              'delete': (_("deleting remote bookmark %s\n"),
                         _('deleting remote bookmark %s failed!\n')),
              }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    # there are two ways to push to a remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if _canusebundle2(pushop) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError as err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if _canusebundle2(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
            if pushop.trmanager:
                pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order (this
    may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscovery dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
103 fh = changegroup.headerlessfixup(fh, header)
104 fh = changegroup.headerlessfixup(fh, header)
104 header = "HG10"
105 header = "HG10"
105 alg = 'UN'
106 alg = 'UN'
106 elif vfs:
107 elif vfs:
107 fname = vfs.join(fname)
108 fname = vfs.join(fname)
108
109
109 magic, version = header[0:2], header[2:4]
110 magic, version = header[0:2], header[2:4]
110
111
111 if magic != 'HG':
112 if magic != 'HG':
112 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
113 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
113 if version == '10':
114 if version == '10':
114 if alg is None:
115 if alg is None:
115 alg = changegroup.readexactly(fh, 2)
116 alg = changegroup.readexactly(fh, 2)
116 return changegroup.cg1unpacker(fh, alg)
117 return changegroup.cg1unpacker(fh, alg)
117 elif version.startswith('2'):
118 elif version.startswith('2'):
118 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
119 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
119 else:
120 else:
120 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
121 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
121
122
122 def buildobsmarkerspart(bundler, markers):
123 def buildobsmarkerspart(bundler, markers):
123 """add an obsmarker part to the bundler with <markers>
124 """add an obsmarker part to the bundler with <markers>
124
125
125 No part is created if markers is empty.
126 No part is created if markers is empty.
126 Raises ValueError if the bundler doesn't support any known obsmarker format.
127 Raises ValueError if the bundler doesn't support any known obsmarker format.
127 """
128 """
128 if markers:
129 if markers:
129 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
130 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
130 version = obsolete.commonversion(remoteversions)
131 version = obsolete.commonversion(remoteversions)
131 if version is None:
132 if version is None:
132 raise ValueError('bundler do not support common obsmarker format')
133 raise ValueError('bundler do not support common obsmarker format')
133 stream = obsolete.encodemarkers(markers, True, version=version)
134 stream = obsolete.encodemarkers(markers, True, version=version)
134 return bundler.newpart('obsmarkers', data=stream)
135 return bundler.newpart('obsmarkers', data=stream)
135 return None
136 return None
136
137
137 def _canusebundle2(op):
138 def _canusebundle2(op):
138 """return true if a pull/push can use bundle2
139 """return true if a pull/push can use bundle2
139
140
140 Feel free to nuke this function when we drop the experimental option"""
141 Feel free to nuke this function when we drop the experimental option"""
141 return (op.repo.ui.configbool('experimental', 'bundle2-exp', True)
142 return (op.repo.ui.configbool('experimental', 'bundle2-exp', True)
142 and op.remote.capable('bundle2'))
143 and op.remote.capable('bundle2'))
143
144
144
145
145 class pushoperation(object):
146 class pushoperation(object):
146 """A object that represent a single push operation
147 """A object that represent a single push operation
147
148
148 It purpose is to carry push related state and very common operation.
149 It purpose is to carry push related state and very common operation.
149
150
150 A new should be created at the beginning of each push and discarded
151 A new should be created at the beginning of each push and discarded
151 afterward.
152 afterward.
152 """
153 """
153
154
154 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
155 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
155 bookmarks=()):
156 bookmarks=()):
156 # repo we push from
157 # repo we push from
157 self.repo = repo
158 self.repo = repo
158 self.ui = repo.ui
159 self.ui = repo.ui
159 # repo we push to
160 # repo we push to
160 self.remote = remote
161 self.remote = remote
161 # force option provided
162 # force option provided
162 self.force = force
163 self.force = force
163 # revs to be pushed (None is "all")
164 # revs to be pushed (None is "all")
164 self.revs = revs
165 self.revs = revs
165 # bookmark explicitly pushed
166 # bookmark explicitly pushed
166 self.bookmarks = bookmarks
167 self.bookmarks = bookmarks
167 # allow push of new branch
168 # allow push of new branch
168 self.newbranch = newbranch
169 self.newbranch = newbranch
169 # did a local lock get acquired?
170 # did a local lock get acquired?
170 self.locallocked = None
171 self.locallocked = None
171 # step already performed
172 # step already performed
172 # (used to check what steps have been already performed through bundle2)
173 # (used to check what steps have been already performed through bundle2)
173 self.stepsdone = set()
174 self.stepsdone = set()
174 # Integer version of the changegroup push result
175 # Integer version of the changegroup push result
175 # - None means nothing to push
176 # - None means nothing to push
176 # - 0 means HTTP error
177 # - 0 means HTTP error
177 # - 1 means we pushed and remote head count is unchanged *or*
178 # - 1 means we pushed and remote head count is unchanged *or*
178 # we have outgoing changesets but refused to push
179 # we have outgoing changesets but refused to push
179 # - other values as described by addchangegroup()
180 # - other values as described by addchangegroup()
180 self.cgresult = None
181 self.cgresult = None
181 # Boolean value for the bookmark push
182 # Boolean value for the bookmark push
182 self.bkresult = None
183 self.bkresult = None
183 # discover.outgoing object (contains common and outgoing data)
184 # discover.outgoing object (contains common and outgoing data)
184 self.outgoing = None
185 self.outgoing = None
185 # all remote heads before the push
186 # all remote heads before the push
186 self.remoteheads = None
187 self.remoteheads = None
187 # testable as a boolean indicating if any nodes are missing locally.
188 # testable as a boolean indicating if any nodes are missing locally.
188 self.incoming = None
189 self.incoming = None
189 # phases changes that must be pushed along side the changesets
190 # phases changes that must be pushed along side the changesets
190 self.outdatedphases = None
191 self.outdatedphases = None
191 # phases changes that must be pushed if changeset push fails
192 # phases changes that must be pushed if changeset push fails
192 self.fallbackoutdatedphases = None
193 self.fallbackoutdatedphases = None
193 # outgoing obsmarkers
194 # outgoing obsmarkers
194 self.outobsmarkers = set()
195 self.outobsmarkers = set()
195 # outgoing bookmarks
196 # outgoing bookmarks
196 self.outbookmarks = []
197 self.outbookmarks = []
197 # transaction manager
198 # transaction manager
198 self.trmanager = None
199 self.trmanager = None
199 # map { pushkey partid -> callback handling failure}
200 # map { pushkey partid -> callback handling failure}
200 # used to handle exception from mandatory pushkey part failure
201 # used to handle exception from mandatory pushkey part failure
201 self.pkfailcb = {}
202 self.pkfailcb = {}
202
203
203 @util.propertycache
204 @util.propertycache
204 def futureheads(self):
205 def futureheads(self):
205 """future remote heads if the changeset push succeeds"""
206 """future remote heads if the changeset push succeeds"""
206 return self.outgoing.missingheads
207 return self.outgoing.missingheads
207
208
208 @util.propertycache
209 @util.propertycache
209 def fallbackheads(self):
210 def fallbackheads(self):
210 """future remote heads if the changeset push fails"""
211 """future remote heads if the changeset push fails"""
211 if self.revs is None:
212 if self.revs is None:
212 # not target to push, all common are relevant
213 # not target to push, all common are relevant
213 return self.outgoing.commonheads
214 return self.outgoing.commonheads
214 unfi = self.repo.unfiltered()
215 unfi = self.repo.unfiltered()
215 # I want cheads = heads(::missingheads and ::commonheads)
216 # I want cheads = heads(::missingheads and ::commonheads)
216 # (missingheads is revs with secret changeset filtered out)
217 # (missingheads is revs with secret changeset filtered out)
217 #
218 #
218 # This can be expressed as:
219 # This can be expressed as:
219 # cheads = ( (missingheads and ::commonheads)
220 # cheads = ( (missingheads and ::commonheads)
220 # + (commonheads and ::missingheads))"
221 # + (commonheads and ::missingheads))"
221 # )
222 # )
222 #
223 #
223 # while trying to push we already computed the following:
224 # while trying to push we already computed the following:
224 # common = (::commonheads)
225 # common = (::commonheads)
225 # missing = ((commonheads::missingheads) - commonheads)
226 # missing = ((commonheads::missingheads) - commonheads)
226 #
227 #
227 # We can pick:
228 # We can pick:
228 # * missingheads part of common (::commonheads)
229 # * missingheads part of common (::commonheads)
229 common = self.outgoing.common
230 common = self.outgoing.common
230 nm = self.repo.changelog.nodemap
231 nm = self.repo.changelog.nodemap
231 cheads = [node for node in self.revs if nm[node] in common]
232 cheads = [node for node in self.revs if nm[node] in common]
232 # and
233 # and
233 # * commonheads parents on missing
234 # * commonheads parents on missing
234 revset = unfi.set('%ln and parents(roots(%ln))',
235 revset = unfi.set('%ln and parents(roots(%ln))',
235 self.outgoing.commonheads,
236 self.outgoing.commonheads,
236 self.outgoing.missing)
237 self.outgoing.missing)
237 cheads.extend(c.node() for c in revset)
238 cheads.extend(c.node() for c in revset)
238 return cheads
239 return cheads
239
240
240 @property
241 @property
241 def commonheads(self):
242 def commonheads(self):
242 """set of all common heads after changeset bundle push"""
243 """set of all common heads after changeset bundle push"""
243 if self.cgresult:
244 if self.cgresult:
244 return self.futureheads
245 return self.futureheads
245 else:
246 else:
246 return self.fallbackheads
247 return self.fallbackheads
247
248
248 # mapping of message used when pushing bookmark
249 # mapping of message used when pushing bookmark
249 bookmsgmap = {'update': (_("updating bookmark %s\n"),
250 bookmsgmap = {'update': (_("updating bookmark %s\n"),
250 _('updating bookmark %s failed!\n')),
251 _('updating bookmark %s failed!\n')),
251 'export': (_("exporting bookmark %s\n"),
252 'export': (_("exporting bookmark %s\n"),
252 _('exporting bookmark %s failed!\n')),
253 _('exporting bookmark %s failed!\n')),
253 'delete': (_("deleting remote bookmark %s\n"),
254 'delete': (_("deleting remote bookmark %s\n"),
254 _('deleting remote bookmark %s failed!\n')),
255 _('deleting remote bookmark %s failed!\n')),
255 }
256 }
256
257
257
258
258 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
259 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
259 '''Push outgoing changesets (limited by revs) from a local
260 '''Push outgoing changesets (limited by revs) from a local
260 repository to remote. Return an integer:
261 repository to remote. Return an integer:
261 - None means nothing to push
262 - None means nothing to push
262 - 0 means HTTP error
263 - 0 means HTTP error
263 - 1 means we pushed and remote head count is unchanged *or*
264 - 1 means we pushed and remote head count is unchanged *or*
264 we have outgoing changesets but refused to push
265 we have outgoing changesets but refused to push
265 - other values as described by addchangegroup()
266 - other values as described by addchangegroup()
266 '''
267 '''
267 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
268 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
268 if pushop.remote.local():
269 if pushop.remote.local():
269 missing = (set(pushop.repo.requirements)
270 missing = (set(pushop.repo.requirements)
270 - pushop.remote.local().supported)
271 - pushop.remote.local().supported)
271 if missing:
272 if missing:
272 msg = _("required features are not"
273 msg = _("required features are not"
273 " supported in the destination:"
274 " supported in the destination:"
274 " %s") % (', '.join(sorted(missing)))
275 " %s") % (', '.join(sorted(missing)))
275 raise error.Abort(msg)
276 raise error.Abort(msg)
276
277
277 # there are two ways to push to remote repo:
278 # there are two ways to push to remote repo:
278 #
279 #
279 # addchangegroup assumes local user can lock remote
280 # addchangegroup assumes local user can lock remote
280 # repo (local filesystem, old ssh servers).
281 # repo (local filesystem, old ssh servers).
281 #
282 #
282 # unbundle assumes local user cannot lock remote repo (new ssh
283 # unbundle assumes local user cannot lock remote repo (new ssh
283 # servers, http servers).
284 # servers, http servers).
284
285
285 if not pushop.remote.canpush():
286 if not pushop.remote.canpush():
286 raise error.Abort(_("destination does not support push"))
287 raise error.Abort(_("destination does not support push"))
287 # get local lock as we might write phase data
288 # get local lock as we might write phase data
288 localwlock = locallock = None
289 localwlock = locallock = None
289 try:
290 try:
290 # bundle2 push may receive a reply bundle touching bookmarks or other
291 # bundle2 push may receive a reply bundle touching bookmarks or other
291 # things requiring the wlock. Take it now to ensure proper ordering.
292 # things requiring the wlock. Take it now to ensure proper ordering.
292 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
293 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
293 if _canusebundle2(pushop) and maypushback:
294 if _canusebundle2(pushop) and maypushback:
294 localwlock = pushop.repo.wlock()
295 localwlock = pushop.repo.wlock()
295 locallock = pushop.repo.lock()
296 locallock = pushop.repo.lock()
296 pushop.locallocked = True
297 pushop.locallocked = True
297 except IOError as err:
298 except IOError as err:
298 pushop.locallocked = False
299 pushop.locallocked = False
299 if err.errno != errno.EACCES:
300 if err.errno != errno.EACCES:
300 raise
301 raise
301 # source repo cannot be locked.
302 # source repo cannot be locked.
302 # We do not abort the push, but just disable the local phase
303 # We do not abort the push, but just disable the local phase
303 # synchronisation.
304 # synchronisation.
304 msg = 'cannot lock source repository: %s\n' % err
305 msg = 'cannot lock source repository: %s\n' % err
305 pushop.ui.debug(msg)
306 pushop.ui.debug(msg)
306 try:
307 try:
307 if pushop.locallocked:
308 if pushop.locallocked:
308 pushop.trmanager = transactionmanager(repo,
309 pushop.trmanager = transactionmanager(repo,
309 'push-response',
310 'push-response',
310 pushop.remote.url())
311 pushop.remote.url())
311 pushop.repo.checkpush(pushop)
312 pushop.repo.checkpush(pushop)
312 lock = None
313 lock = None
313 unbundle = pushop.remote.capable('unbundle')
314 unbundle = pushop.remote.capable('unbundle')
314 if not unbundle:
315 if not unbundle:
315 lock = pushop.remote.lock()
316 lock = pushop.remote.lock()
316 try:
317 try:
317 _pushdiscovery(pushop)
318 _pushdiscovery(pushop)
318 if _canusebundle2(pushop):
319 if _canusebundle2(pushop):
319 _pushbundle2(pushop)
320 _pushbundle2(pushop)
320 _pushchangeset(pushop)
321 _pushchangeset(pushop)
321 _pushsyncphase(pushop)
322 _pushsyncphase(pushop)
322 _pushobsolete(pushop)
323 _pushobsolete(pushop)
323 _pushbookmark(pushop)
324 _pushbookmark(pushop)
324 finally:
325 finally:
325 if lock is not None:
326 if lock is not None:
326 lock.release()
327 lock.release()
327 if pushop.trmanager:
328 if pushop.trmanager:
328 pushop.trmanager.close()
329 pushop.trmanager.close()
329 finally:
330 finally:
330 if pushop.trmanager:
331 if pushop.trmanager:
331 pushop.trmanager.release()
332 pushop.trmanager.release()
332 if locallock is not None:
333 if locallock is not None:
333 locallock.release()
334 locallock.release()
334 if localwlock is not None:
335 if localwlock is not None:
335 localwlock.release()
336 localwlock.release()
336
337
337 return pushop
338 return pushop
338
339
339 # list of steps to perform discovery before push
340 # list of steps to perform discovery before push
340 pushdiscoveryorder = []
341 pushdiscoveryorder = []
341
342
342 # Mapping between step name and function
343 # Mapping between step name and function
343 #
344 #
344 # This exists to help extensions wrap steps if necessary
345 # This exists to help extensions wrap steps if necessary
345 pushdiscoverymapping = {}
346 pushdiscoverymapping = {}
346
347
347 def pushdiscovery(stepname):
348 def pushdiscovery(stepname):
348 """decorator for function performing discovery before push
349 """decorator for function performing discovery before push
349
350
350 The function is added to the step -> function mapping and appended to the
351 The function is added to the step -> function mapping and appended to the
351 list of steps. Beware that decorated function will be added in order (this
352 list of steps. Beware that decorated function will be added in order (this
352 may matter).
353 may matter).
353
354
354 You can only use this decorator for a new step, if you want to wrap a step
355 You can only use this decorator for a new step, if you want to wrap a step
355 from an extension, change the pushdiscovery dictionary directly."""
356 from an extension, change the pushdiscovery dictionary directly."""
356 def dec(func):
357 def dec(func):
357 assert stepname not in pushdiscoverymapping
358 assert stepname not in pushdiscoverymapping
358 pushdiscoverymapping[stepname] = func
359 pushdiscoverymapping[stepname] = func
359 pushdiscoveryorder.append(stepname)
360 pushdiscoveryorder.append(stepname)
360 return func
361 return func
361 return dec
362 return dec
362
363
363 def _pushdiscovery(pushop):
364 def _pushdiscovery(pushop):
364 """Run all discovery steps"""
365 """Run all discovery steps"""
365 for stepname in pushdiscoveryorder:
366 for stepname in pushdiscoveryorder:
366 step = pushdiscoverymapping[stepname]
367 step = pushdiscoverymapping[stepname]
367 step(pushop)
368 step(pushop)
368
369
@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc
381
382
@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure case of the changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changesets are to be pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX roots; we may want to ensure it is, but it is costly
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # add changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback
430
431
@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, that can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
441
442
@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set(pushop.bookmarks)

    comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in advdst + diverge + differ:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
            # treat as "deleted locally"
            pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()
492
493
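# For illustration, the (bookmark, old, new) triples collected above encode
# the pushkey action to take (these node hashes are hypothetical):
#
#     ('feature', '', 'a1b2c3...')          # added locally   -> export
#     ('stale', 'a1b2c3...', '')            # deleted locally -> delete
#     ('main', 'a1b2c3...', 'd4e5f6...')    # advanced        -> update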
def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for 80 char limit reason
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If we are pushing and there is at least one
            # obsolete or unstable changeset in missing, at
            # least one of the missing heads will be obsolete or
            # unstable. So checking heads only is ok
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.troubled():
                    raise error.Abort(mst[ctx.troubles()[0]] % ctx)

    # internal config: bookmarks.pushing
    newbm = pushop.ui.configlist('bookmarks', 'pushing')
    discovery.checkheads(unfi, pushop.remote, outgoing,
                         pushop.remoteheads,
                         pushop.newbranch,
                         bool(pushop.incoming),
                         newbm)
    return True
529
530
# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec
556
557
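# Example (hypothetical extension code, not part of this module): the
# optional ``idx`` argument lets an extension position its part generator,
# e.g. before all built-in parts:
#
#     @exchange.b2partsgenerator('mypart', idx=0)
#     def _pushb2mypart(pushop, bundler):
#         bundler.newpart('x-myextension-part', data=buildmydata(pushop))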
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    if not pushop.force:
        bundler.newpart('check:heads', data=iter(pushop.remoteheads))
564
565
@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = None
    cgversions = b2caps.get('changegroup')
    if not cgversions:  # 3.1 and 3.2 ship with an empty value
        cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                                pushop.outgoing)
    else:
        cgversions = [v for v in cgversions if v in changegroup.packermap]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
        cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                                pushop.outgoing,
                                                version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if version is not None:
        cgpart.addparam('version', version)
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply
606
607
@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply
647
648
@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        buildobsmarkerspart(bundler, markers)
659
660
@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1
    return handlereply
711
712
712
713
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(stream, ['force'], 'push')
        except error.BundleValueError as exc:
            raise error.Abort('missing support for %s' % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort('missing support for %s' % exc)
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)
755
756
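# Sketch of the reply-handler protocol used above (hypothetical extension
# code, not part of this module): a part generator that needs to inspect the
# server reply returns a callable, which _pushbundle2 collects and invokes
# with the processed bundle operation:
#
#     @exchange.b2partsgenerator('mypart')
#     def _pushb2mypart(pushop, bundler):
#         part = bundler.newpart('x-myextension-part')
#         def handlereply(op):
#             results = op.records.getreplies(part.id)
#             ...  # inspect results here
#         return handlereply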
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
                                             bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())
804
805
def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # we may be in the issue 3871 case!
        # We drop the phase synchronisation done by courtesy, to publish
        # changesets that are possibly still draft locally on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public-only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fall back to the independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

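The reverse sort above guarantees that the `dump0` chunk is pushed last, so the receiver only sees the first chunk once all the others have arrived. A standalone sketch of that ordering property (the `dumpN` naming mirrors `obsolete._pushkeyescape`; the helper names are illustrative, not Mercurial APIs):

```python
def chunk_keys(payloads):
    """Name chunks dump0..dumpN, as _pushkeyescape does."""
    return dict(('dump%d' % i, data) for i, data in enumerate(payloads))

def push_order(remotedata):
    # Reverse lexicographic sort: every other dumpN key compares
    # greater than 'dump0', so 'dump0' always comes last.
    return sorted(remotedata, reverse=True)

keys = chunk_keys(['a', 'b', 'c'])
assert push_order(keys) == ['dump2', 'dump1', 'dump0']
```

Note the property holds even past ten chunks (`'dump10' > 'dump1' > 'dump0'` lexicographically), which is all the protocol needs.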
def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation.

    Its purpose is to carry pull-related state and very common operations.

    A new instance should be created at the beginning of each pull and
    discarded afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revisions we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = bookmarks
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # set of steps already done
        self.stepsdone = set()

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible,
            # so sync on everything common.
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset,
            # so sync on this subset.
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return _canusebundle2(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(object):
    """An object to manage the life cycle of a transaction.

    It creates the transaction on demand and calls the appropriate hooks
    when closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

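The lazy pattern above means a pull that transfers nothing never opens a transaction, while `release()` in a `finally` block stays safe either way. A self-contained sketch of the same life cycle, with a plain dict standing in for the repository transaction object (the state names are illustrative only):

```python
class txmanager(object):
    """Create the transaction only when first requested, as
    transactionmanager does; 'tx' is a stand-in for repo.transaction()."""
    def __init__(self):
        self._tr = None
    def transaction(self):
        if self._tr is None:
            self._tr = {'state': 'open'}
        return self._tr
    def close(self):
        if self._tr is not None:
            self._tr['state'] = 'closed'
    def release(self):
        # Abort anything still open; a no-op when nothing was created.
        if self._tr is not None and self._tr['state'] == 'open':
            self._tr['state'] = 'aborted'

mgr = txmanager()
mgr.release()          # safe: no transaction was ever opened
tr = mgr.transaction()
mgr.close()
mgr.release()          # already closed, so release does nothing
assert tr['state'] == 'closed'
```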
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    lock = pullop.repo.lock()
    try:
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        pullop.trmanager.release()
        lock.release()

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for a function performing discovery before pull

    The function is added to the step -> function mapping and appended to
    the list of steps. Beware that decorated functions will be added in
    order (this may matter).

    You can only use this decorator for a new step; if you want to wrap a
    step from an extension, change the pulldiscoverymapping dictionary
    directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

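The decorator keeps two structures in sync: a name-to-function mapping that extensions can patch, and an ordered list that fixes execution order. A standalone sketch of that registration pattern (names here are illustrative, not Mercurial's):

```python
steporder = []
stepmapping = {}

def discoverystep(name):
    """Register a function under ``name`` and remember insertion order,
    mirroring the pulldiscovery decorator."""
    def dec(func):
        assert name not in stepmapping  # a step name may be used only once
        stepmapping[name] = func
        steporder.append(name)
        return func
    return dec

@discoverystep('bookmarks')
def fetch_bookmarks(state):
    state.append('bookmarks')

@discoverystep('changegroup')
def find_common(state):
    state.append('changegroup')

ran = []
for name in steporder:
    stepmapping[name](ran)   # run steps in registration order
assert ran == ['bookmarks', 'changegroup']
```

An extension that wants to wrap a step replaces `stepmapping[name]` with its wrapper; the order list is untouched, so the pipeline shape stays stable.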
def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but
        # locally hidden" situations. We do not perform discovery on an
        # unfiltered repository because it ends up doing a pathological
        # amount of round trips for a huge amount of changesets we do not
        # care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

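The head-filtering hack above can be exercised in isolation: any remote head whose node already exists locally is moved into ``common``, and if every supposedly-unknown head turns out to be locally known, there is nothing left to fetch. A simplified sketch (the function name and set-based node map are stand-ins, not Mercurial APIs):

```python
def refine(common, fetch, rheads, local_nodes):
    """Drop remote heads already present locally (e.g. hidden), moving
    them into common; clear fetch when no unknown head remains."""
    scommon = set(common)
    filteredrheads = []
    for n in rheads:
        if n in local_nodes:
            if n not in scommon:
                common.append(n)
        else:
            filteredrheads.append(n)
    if not filteredrheads:
        fetch = []
    return common, fetch, filteredrheads

# both remote heads are known locally -> nothing to fetch
common, fetch, rheads = refine(['a'], ['x'], ['b', 'c'], {'a', 'b', 'c'})
assert common == ['a', 'b', 'c'] and fetch == [] and rheads == []
```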
def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    streaming, streamreqs = streamclone.canperformstreamclone(pullop)

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phase']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')
    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except error.BundleValueError as exc:
        raise error.Abort('missing support for %s' % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from the remote into the local repo"""
    # We delay opening the transaction as late as possible so we don't
    # open a transaction for nothing and don't break a future useful
    # rollback call.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

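The capability cascade above — prefer `getbundle`, fall back to the legacy `changegroup`/`changegroupsubset` commands, and refuse a partial pull the old protocol cannot express — can be sketched as a small pure function (a hypothetical helper for illustration, not part of Mercurial's API):

```python
def choose_fetch_method(caps, heads):
    """Pick a wire command the way _pullchangeset does. ``caps`` is the
    set of remote capabilities; ``heads`` is None for a full pull."""
    if 'getbundle' in caps:
        return 'getbundle'               # modern remote: always works
    if heads is None:
        return 'changegroup'             # legacy full pull
    if 'changegroupsubset' not in caps:
        # legacy remote that cannot serve a subset of heads
        raise ValueError('partial pull requires changegroupsubset')
    return 'changegroupsubset'           # legacy partial pull

assert choose_fetch_method({'getbundle'}, None) == 'getbundle'
assert choose_fetch_method(set(), None) == 'changegroup'
assert choose_fetch_method({'changegroupsubset'}, ['h']) == 'changegroupsubset'
```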
def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public.
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction may have been created (when
    applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = set(['HG20'])
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urllib.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for a function generating a bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; to wrap a step from an
    extension, modify the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
              **kwargs):
    """return a full bundle (with potentially multiple kinds of parts)

    Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
    passed. For now, the bundle can contain only a changegroup, but this
    will change when more part types become available for bundle2.

    This is different from changegroup.getchangegroup, which only returns
    an HG10 changegroup bundle. They may eventually get reunited in the
    future when we have a clearer idea of the API we want to use to query
    different data.

    The implementation is at a very early stage and will get massive rework
    when the API of bundle is refined.
    """
    # bundle10 case
    usebundle2 = False
    if bundlecaps is not None:
        usebundle2 = any((cap.startswith('HG2') for cap in bundlecaps))
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        return changegroup.getchangegroup(repo, source, heads=heads,
                                          common=common, bundlecaps=bundlecaps)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urllib.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **kwargs)

    return util.chunkbuffer(bundler.getchunks())

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = None
        cgversions = b2caps.get('changegroup')
        getcgkwargs = {}
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions if v in changegroup.packermap]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = getcgkwargs['version'] = max(cgversions)
        outgoing = changegroup.computeoutgoing(repo, heads, common)
        cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                bundlecaps=bundlecaps,
                                                **getcgkwargs)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if version is not None:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset nodes and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = changegroup.computeoutgoing(repo, heads, common)

    if not outgoing.missingheads:
        return

    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = util.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
                                       False)
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if util.safehasattr(cg, 'params'):
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
        else:
            lockandtr[1] = repo.lock()
            r = changegroup.addchangegroup(repo, cg, source, url)
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('experimental', 'clonebundles', False):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')
    entries = parseclonebundlesmanifest(res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(repo, entries)
    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    # TODO sort entries by user preferences.

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback', False):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('consider contacting the server '
                                 'operator if this error persists'))

def parseclonebundlesmanifest(s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[urllib.unquote(key)] = urllib.unquote(value)

        m.append(attrs)

    return m

def filterclonebundleentries(repo, entries):
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                parsebundlespec(repo, spec, strict=True)
            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    lock = repo.lock()
    try:
        tr = repo.transaction('bundleurl')
        try:
            try:
                fh = urlmod.open(ui, url)
                cg = readbundle(ui, fh, 'stream')

                if isinstance(cg, bundle2.unbundle20):
                    bundle2.processbundle(repo, cg, lambda: tr)
                else:
                    changegroup.addchangegroup(repo, cg, 'clonebundles', url)
                tr.close()
                return True
            except urllib2.HTTPError as e:
                ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
            except urllib2.URLError as e:
                ui.warn(_('error fetching bundle: %s\n') % e.reason)

            return False
        finally:
            tr.release()
    finally:
        lock.release()
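The manifest parsing and SNI filtering performed by `parseclonebundlesmanifest` and `filterclonebundleentries` above can be sketched as a standalone snippet. This is a simplified illustration, not Mercurial's actual code: it drops the `repo`/`ui` plumbing, and a plain `have_sni` flag stands in for `sslutil.hassni`.

```python
# Standalone sketch: parse "URL key=value" manifest lines, then drop
# entries that declare REQUIRESNI when the client cannot send SNI.
from urllib.parse import unquote

def parse_manifest(text):
    """Parse manifest text into a list of {'URL': ..., attr: ...} dicts."""
    entries = []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[unquote(key)] = unquote(value)
        entries.append(attrs)
    return entries

def filter_entries(entries, have_sni):
    """Drop entries requiring SNI when the client lacks SNI support."""
    return [e for e in entries
            if not ('REQUIRESNI' in e and not have_sni)]

manifest = ("https://example.com/full.hg BUNDLESPEC=gzip-v1\n"
            "https://sni.example.com/full.hg REQUIRESNI=true\n")
usable = filter_entries(parse_manifest(manifest), have_sni=False)
print([e['URL'] for e in usable])  # ['https://example.com/full.hg']
```

As in the real code, the presence of the `REQUIRESNI` attribute alone triggers filtering; its value is not inspected.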
@@ -1,229 +1,263 b''
Set up a server

  $ hg init server
  $ cd server
  $ cat >> .hg/hgrc << EOF
  > [extensions]
  > clonebundles =
  > EOF

  $ touch foo
  $ hg -q commit -A -m 'add foo'
  $ touch bar
  $ hg -q commit -A -m 'add bar'

  $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
  $ cat hg.pid >> $DAEMON_PIDS
  $ cd ..

Feature disabled by default
(client should not request manifest)

  $ hg clone -U http://localhost:$HGPORT feature-disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files

  $ cat server/access.log
  * - - [*] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
  * - - [*] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D (glob)
  * - - [*] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=aaff8d2ffbbf07a46dd1f05d8ae7877e3f56e2a2&listkeys=phase%2Cbookmarks (glob)
  * - - [*] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases (glob)

  $ cat >> $HGRCPATH << EOF
  > [experimental]
  > clonebundles = true
  > EOF

Missing manifest should not result in server lookup
40 Missing manifest should not result in server lookup
41
41
42 $ hg --verbose clone -U http://localhost:$HGPORT no-manifest
42 $ hg --verbose clone -U http://localhost:$HGPORT no-manifest
43 requesting all changes
43 requesting all changes
44 adding changesets
44 adding changesets
45 adding manifests
45 adding manifests
46 adding file changes
46 adding file changes
47 added 2 changesets with 2 changes to 2 files
47 added 2 changesets with 2 changes to 2 files
48
48
49 $ tail -4 server/access.log
49 $ tail -4 server/access.log
50 * - - [*] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
50 * - - [*] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
51 * - - [*] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D (glob)
51 * - - [*] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D (glob)
52 * - - [*] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=aaff8d2ffbbf07a46dd1f05d8ae7877e3f56e2a2&listkeys=phase%2Cbookmarks (glob)
52 * - - [*] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=aaff8d2ffbbf07a46dd1f05d8ae7877e3f56e2a2&listkeys=phase%2Cbookmarks (glob)
53 * - - [*] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases (glob)
53 * - - [*] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases (glob)
54
54
55 Empty manifest file results in retrieval
55 Empty manifest file results in retrieval
56 (the extension only checks if the manifest file exists)
56 (the extension only checks if the manifest file exists)
57
57
58 $ touch server/.hg/clonebundles.manifest
58 $ touch server/.hg/clonebundles.manifest
59 $ hg --verbose clone -U http://localhost:$HGPORT empty-manifest
59 $ hg --verbose clone -U http://localhost:$HGPORT empty-manifest
60 no clone bundles available on remote; falling back to regular clone
60 no clone bundles available on remote; falling back to regular clone
61 requesting all changes
61 requesting all changes
62 adding changesets
62 adding changesets
63 adding manifests
63 adding manifests
64 adding file changes
64 adding file changes
65 added 2 changesets with 2 changes to 2 files
65 added 2 changesets with 2 changes to 2 files
66
66
67 Manifest file with invalid URL aborts
67 Manifest file with invalid URL aborts
68
68
69 $ echo 'http://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest
69 $ echo 'http://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest
70 $ hg clone http://localhost:$HGPORT 404-url
70 $ hg clone http://localhost:$HGPORT 404-url
71 applying clone bundle from http://does.not.exist/bundle.hg
71 applying clone bundle from http://does.not.exist/bundle.hg
72 error fetching bundle: [Errno -2] Name or service not known
72 error fetching bundle: [Errno -2] Name or service not known
73 abort: error applying bundle
73 abort: error applying bundle
74 (consider contacting the server operator if this error persists)
74 (consider contacting the server operator if this error persists)
75 [255]
75 [255]
76
76
77 Server is not running aborts
77 Server is not running aborts
78
78
79 $ echo "http://localhost:$HGPORT1/bundle.hg" > server/.hg/clonebundles.manifest
79 $ echo "http://localhost:$HGPORT1/bundle.hg" > server/.hg/clonebundles.manifest
80 $ hg clone http://localhost:$HGPORT server-not-runner
80 $ hg clone http://localhost:$HGPORT server-not-runner
81 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
81 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
82 error fetching bundle: [Errno 111] Connection refused
82 error fetching bundle: [Errno 111] Connection refused
83 abort: error applying bundle
83 abort: error applying bundle
84 (consider contacting the server operator if this error persists)
84 (consider contacting the server operator if this error persists)
85 [255]
85 [255]
86
86
87 Server returns 404
87 Server returns 404
88
88
89 $ python $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
89 $ python $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
90 $ cat http.pid >> $DAEMON_PIDS
90 $ cat http.pid >> $DAEMON_PIDS
91 $ hg clone http://localhost:$HGPORT running-404
91 $ hg clone http://localhost:$HGPORT running-404
92 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
92 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
93 HTTP error fetching bundle: HTTP Error 404: File not found
93 HTTP error fetching bundle: HTTP Error 404: File not found
94 abort: error applying bundle
94 abort: error applying bundle
95 (consider contacting the server operator if this error persists)
95 (consider contacting the server operator if this error persists)
96 [255]
96 [255]
97
97
98 We can override failure to fall back to regular clone
98 We can override failure to fall back to regular clone
99
99
100 $ hg --config ui.clonebundlefallback=true clone -U http://localhost:$HGPORT 404-fallback
100 $ hg --config ui.clonebundlefallback=true clone -U http://localhost:$HGPORT 404-fallback
101 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
101 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
102 HTTP error fetching bundle: HTTP Error 404: File not found
102 HTTP error fetching bundle: HTTP Error 404: File not found
103 falling back to normal clone
103 falling back to normal clone
104 requesting all changes
104 requesting all changes
105 adding changesets
105 adding changesets
106 adding manifests
106 adding manifests
107 adding file changes
107 adding file changes
108 added 2 changesets with 2 changes to 2 files
108 added 2 changesets with 2 changes to 2 files
109
109
110 Bundle with partial content works
110 Bundle with partial content works
111
111
112 $ hg -R server bundle --type gzip-v1 --base null -r 53245c60e682 partial.hg
112 $ hg -R server bundle --type gzip-v1 --base null -r 53245c60e682 partial.hg
113 1 changesets found
113 1 changesets found
114
114
115 We verify exact bundle content as an extra check against accidental future
115 We verify exact bundle content as an extra check against accidental future
116 changes. If this output changes, we could break old clients.
116 changes. If this output changes, we could break old clients.
117
117
118 $ f --size --hexdump partial.hg
118 $ f --size --hexdump partial.hg
119 partial.hg: size=208
119 partial.hg: size=208
120 0000: 48 47 31 30 47 5a 78 9c 63 60 60 98 17 ac 12 93 |HG10GZx.c``.....|
120 0000: 48 47 31 30 47 5a 78 9c 63 60 60 98 17 ac 12 93 |HG10GZx.c``.....|
121 0010: f0 ac a9 23 45 70 cb bf 0d 5f 59 4e 4a 7f 79 21 |...#Ep..._YNJ.y!|
121 0010: f0 ac a9 23 45 70 cb bf 0d 5f 59 4e 4a 7f 79 21 |...#Ep..._YNJ.y!|
122 0020: 9b cc 40 24 20 a0 d7 ce 2c d1 38 25 cd 24 25 d5 |..@$ ...,.8%.$%.|
122 0020: 9b cc 40 24 20 a0 d7 ce 2c d1 38 25 cd 24 25 d5 |..@$ ...,.8%.$%.|
123 0030: d8 c2 22 cd 38 d9 24 cd 22 d5 c8 22 cd 24 cd 32 |..".8.$."..".$.2|
123 0030: d8 c2 22 cd 38 d9 24 cd 22 d5 c8 22 cd 24 cd 32 |..".8.$."..".$.2|
124 0040: d1 c2 d0 c4 c8 d2 32 d1 38 39 29 c9 34 cd d4 80 |......2.89).4...|
124 0040: d1 c2 d0 c4 c8 d2 32 d1 38 39 29 c9 34 cd d4 80 |......2.89).4...|
125 0050: ab 24 b5 b8 84 cb 40 c1 80 2b 2d 3f 9f 8b 2b 31 |.$....@..+-?..+1|
125 0050: ab 24 b5 b8 84 cb 40 c1 80 2b 2d 3f 9f 8b 2b 31 |.$....@..+-?..+1|
126 0060: 25 45 01 c8 80 9a d2 9b 65 fb e5 9e 45 bf 8d 7f |%E......e...E...|
126 0060: 25 45 01 c8 80 9a d2 9b 65 fb e5 9e 45 bf 8d 7f |%E......e...E...|
127 0070: 9f c6 97 9f 2b 44 34 67 d9 ec 8e 0f a0 92 0b 75 |....+D4g.......u|
127 0070: 9f c6 97 9f 2b 44 34 67 d9 ec 8e 0f a0 92 0b 75 |....+D4g.......u|
128 0080: 41 d6 24 59 18 a4 a4 9a a6 18 1a 5b 98 9b 5a 98 |A.$Y.......[..Z.|
128 0080: 41 d6 24 59 18 a4 a4 9a a6 18 1a 5b 98 9b 5a 98 |A.$Y.......[..Z.|
129 0090: 9a 18 26 9b a6 19 98 1a 99 99 26 a6 18 9a 98 24 |..&.......&....$|
129 0090: 9a 18 26 9b a6 19 98 1a 99 99 26 a6 18 9a 98 24 |..&.......&....$|
130 00a0: 26 59 a6 25 5a 98 a5 18 a6 24 71 41 35 b1 43 dc |&Y.%Z....$qA5.C.|
130 00a0: 26 59 a6 25 5a 98 a5 18 a6 24 71 41 35 b1 43 dc |&Y.%Z....$qA5.C.|
131 00b0: 96 b0 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a |.....E..V....R..|
131 00b0: 96 b0 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a |.....E..V....R..|
132 00c0: 78 ed fc d5 76 f1 36 95 dc 05 07 00 ad 39 5e d3 |x...v.6......9^.|
132 00c0: 78 ed fc d5 76 f1 36 95 dc 05 07 00 ad 39 5e d3 |x...v.6......9^.|
133
133
134 $ echo "http://localhost:$HGPORT1/partial.hg" > server/.hg/clonebundles.manifest
134 $ echo "http://localhost:$HGPORT1/partial.hg" > server/.hg/clonebundles.manifest
135 $ hg clone -U http://localhost:$HGPORT partial-bundle
135 $ hg clone -U http://localhost:$HGPORT partial-bundle
136 applying clone bundle from http://localhost:$HGPORT1/partial.hg
136 applying clone bundle from http://localhost:$HGPORT1/partial.hg
137 adding changesets
137 adding changesets
138 adding manifests
138 adding manifests
139 adding file changes
139 adding file changes
140 added 1 changesets with 1 changes to 1 files
140 added 1 changesets with 1 changes to 1 files
141 finished applying clone bundle
141 finished applying clone bundle
142 searching for changes
142 searching for changes
143 adding changesets
143 adding changesets
144 adding manifests
144 adding manifests
145 adding file changes
145 adding file changes
146 added 1 changesets with 1 changes to 1 files
146 added 1 changesets with 1 changes to 1 files
147
147
148 Bundle with full content works
148 Bundle with full content works
149
149
150 $ hg -R server bundle --type gzip-v2 --base null -r tip full.hg
150 $ hg -R server bundle --type gzip-v2 --base null -r tip full.hg
151 2 changesets found
151 2 changesets found
152
152
153 Again, we perform an extra check against bundle content changes. If this content
153 Again, we perform an extra check against bundle content changes. If this content
154 changes, clone bundles produced by new Mercurial versions may not be readable
154 changes, clone bundles produced by new Mercurial versions may not be readable
155 by old clients.
155 by old clients.
156
156
157 $ f --size --hexdump full.hg
157 $ f --size --hexdump full.hg
158 full.hg: size=408
158 full.hg: size=408
159 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
159 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
160 0010: 69 6f 6e 3d 47 5a 78 9c 63 60 60 90 e5 76 f6 70 |ion=GZx.c``..v.p|
160 0010: 69 6f 6e 3d 47 5a 78 9c 63 60 60 90 e5 76 f6 70 |ion=GZx.c``..v.p|
161 0020: f4 73 77 75 0f f2 0f 0d 60 00 02 46 06 76 a6 b2 |.swu....`..F.v..|
161 0020: f4 73 77 75 0f f2 0f 0d 60 00 02 46 06 76 a6 b2 |.swu....`..F.v..|
162 0030: d4 a2 e2 cc fc 3c 03 23 06 06 e6 7d 40 b1 4d c1 |.....<.#...}@.M.|
162 0030: d4 a2 e2 cc fc 3c 03 23 06 06 e6 7d 40 b1 4d c1 |.....<.#...}@.M.|
163 0040: 2a 31 09 cf 9a 3a 52 04 b7 fc db f0 95 e5 a4 f4 |*1...:R.........|
163 0040: 2a 31 09 cf 9a 3a 52 04 b7 fc db f0 95 e5 a4 f4 |*1...:R.........|
164 0050: 97 17 b2 c9 0c 14 00 02 e6 d9 99 25 1a a7 a4 99 |...........%....|
164 0050: 97 17 b2 c9 0c 14 00 02 e6 d9 99 25 1a a7 a4 99 |...........%....|
165 0060: a4 a4 1a 5b 58 a4 19 27 9b a4 59 a4 1a 59 a4 99 |...[X..'..Y..Y..|
165 0060: a4 a4 1a 5b 58 a4 19 27 9b a4 59 a4 1a 59 a4 99 |...[X..'..Y..Y..|
166 0070: a4 59 26 5a 18 9a 18 59 5a 26 1a 27 27 25 99 a6 |.Y&Z...YZ&.''%..|
166 0070: a4 59 26 5a 18 9a 18 59 5a 26 1a 27 27 25 99 a6 |.Y&Z...YZ&.''%..|
167 0080: 99 1a 70 95 a4 16 97 70 19 28 18 70 a5 e5 e7 73 |..p....p.(.p...s|
167 0080: 99 1a 70 95 a4 16 97 70 19 28 18 70 a5 e5 e7 73 |..p....p.(.p...s|
168 0090: 71 25 a6 a4 28 00 19 40 13 0e ac fa df ab ff 7b |q%..(..@.......{|
168 0090: 71 25 a6 a4 28 00 19 40 13 0e ac fa df ab ff 7b |q%..(..@.......{|
169 00a0: 3f fb 92 dc 8b 1f 62 bb 9e b7 d7 d9 87 3d 5a 44 |?.....b......=ZD|
169 00a0: 3f fb 92 dc 8b 1f 62 bb 9e b7 d7 d9 87 3d 5a 44 |?.....b......=ZD|
170 00b0: ac 2f b0 a9 c3 66 1e 54 b9 26 08 a7 1a 1b 1a a7 |./...f.T.&......|
170 00b0: ac 2f b0 a9 c3 66 1e 54 b9 26 08 a7 1a 1b 1a a7 |./...f.T.&......|
171 00c0: 25 1b 9a 1b 99 19 9a 5a 18 9b a6 18 19 00 dd 67 |%......Z.......g|
171 00c0: 25 1b 9a 1b 99 19 9a 5a 18 9b a6 18 19 00 dd 67 |%......Z.......g|
172 00d0: 61 61 98 06 f4 80 49 4a 8a 65 52 92 41 9a 81 81 |aa....IJ.eR.A...|
172 00d0: 61 61 98 06 f4 80 49 4a 8a 65 52 92 41 9a 81 81 |aa....IJ.eR.A...|
173 00e0: a5 11 17 50 31 30 58 19 cc 80 98 25 29 b1 08 c4 |...P10X....%)...|
173 00e0: a5 11 17 50 31 30 58 19 cc 80 98 25 29 b1 08 c4 |...P10X....%)...|
174 00f0: 37 07 79 19 88 d9 41 ee 07 8a 41 cd 5d 98 65 fb |7.y...A...A.].e.|
174 00f0: 37 07 79 19 88 d9 41 ee 07 8a 41 cd 5d 98 65 fb |7.y...A...A.].e.|
175 0100: e5 9e 45 bf 8d 7f 9f c6 97 9f 2b 44 34 67 d9 ec |..E.......+D4g..|
175 0100: e5 9e 45 bf 8d 7f 9f c6 97 9f 2b 44 34 67 d9 ec |..E.......+D4g..|
176 0110: 8e 0f a0 61 a8 eb 82 82 2e c9 c2 20 25 d5 34 c5 |...a....... %.4.|
176 0110: 8e 0f a0 61 a8 eb 82 82 2e c9 c2 20 25 d5 34 c5 |...a....... %.4.|
177 0120: d0 d8 c2 dc d4 c2 d4 c4 30 d9 34 cd c0 d4 c8 cc |........0.4.....|
177 0120: d0 d8 c2 dc d4 c2 d4 c4 30 d9 34 cd c0 d4 c8 cc |........0.4.....|
178 0130: 34 31 c5 d0 c4 24 31 c9 32 2d d1 c2 2c c5 30 25 |41...$1.2-..,.0%|
178 0130: 34 31 c5 d0 c4 24 31 c9 32 2d d1 c2 2c c5 30 25 |41...$1.2-..,.0%|
179 0140: 09 e4 ee 85 8f 85 ff 88 ab 89 36 c7 2a c4 47 34 |..........6.*.G4|
179 0140: 09 e4 ee 85 8f 85 ff 88 ab 89 36 c7 2a c4 47 34 |..........6.*.G4|
180 0150: fe f8 ec 7b 73 37 3f c3 24 62 1d 8d 4d 1d 9e 40 |...{s7?.$b..M..@|
180 0150: fe f8 ec 7b 73 37 3f c3 24 62 1d 8d 4d 1d 9e 40 |...{s7?.$b..M..@|
181 0160: 06 3b 10 14 36 a4 38 10 04 d8 21 01 5a b2 83 f7 |.;..6.8...!.Z...|
181 0160: 06 3b 10 14 36 a4 38 10 04 d8 21 01 5a b2 83 f7 |.;..6.8...!.Z...|
182 0170: e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a 78 ed fc d5 |.E..V....R..x...|
182 0170: e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a 78 ed fc d5 |.E..V....R..x...|
183 0180: 76 f1 36 25 81 49 c0 ad 30 c0 0e 49 8f 54 b7 9e |v.6%.I..0..I.T..|
183 0180: 76 f1 36 25 81 49 c0 ad 30 c0 0e 49 8f 54 b7 9e |v.6%.I..0..I.T..|
184 0190: d4 1c 09 00 bb 8d f0 bd |........|
184 0190: d4 1c 09 00 bb 8d f0 bd |........|
185
185
186 $ echo "http://localhost:$HGPORT1/full.hg" > server/.hg/clonebundles.manifest
186 $ echo "http://localhost:$HGPORT1/full.hg" > server/.hg/clonebundles.manifest
187 $ hg clone -U http://localhost:$HGPORT full-bundle
187 $ hg clone -U http://localhost:$HGPORT full-bundle
188 applying clone bundle from http://localhost:$HGPORT1/full.hg
188 applying clone bundle from http://localhost:$HGPORT1/full.hg
189 adding changesets
189 adding changesets
190 adding manifests
190 adding manifests
191 adding file changes
191 adding file changes
192 added 2 changesets with 2 changes to 2 files
192 added 2 changesets with 2 changes to 2 files
193 finished applying clone bundle
193 finished applying clone bundle
194 searching for changes
194 searching for changes
195 no changes found
195 no changes found
196
196
197 Entry with unknown BUNDLESPEC is filtered and not used
197 Entry with unknown BUNDLESPEC is filtered and not used
198
198
199 $ cat > server/.hg/clonebundles.manifest << EOF
199 $ cat > server/.hg/clonebundles.manifest << EOF
200 > http://bad.entry1 BUNDLESPEC=UNKNOWN
200 > http://bad.entry1 BUNDLESPEC=UNKNOWN
201 > http://bad.entry2 BUNDLESPEC=xz-v1
201 > http://bad.entry2 BUNDLESPEC=xz-v1
202 > http://bad.entry3 BUNDLESPEC=none-v100
202 > http://bad.entry3 BUNDLESPEC=none-v100
203 > http://localhost:$HGPORT1/full.hg BUNDLESPEC=gzip-v2
203 > http://localhost:$HGPORT1/full.hg BUNDLESPEC=gzip-v2
204 > EOF
204 > EOF
205
205
206 $ hg clone -U http://localhost:$HGPORT filter-unknown-type
206 $ hg clone -U http://localhost:$HGPORT filter-unknown-type
207 applying clone bundle from http://localhost:$HGPORT1/full.hg
207 applying clone bundle from http://localhost:$HGPORT1/full.hg
208 adding changesets
208 adding changesets
209 adding manifests
209 adding manifests
210 adding file changes
210 adding file changes
211 added 2 changesets with 2 changes to 2 files
211 added 2 changesets with 2 changes to 2 files
212 finished applying clone bundle
212 finished applying clone bundle
213 searching for changes
213 searching for changes
214 no changes found
214 no changes found
215
215
216 Automatic fallback when all entries are filtered
216 Automatic fallback when all entries are filtered
217
217
218 $ cat > server/.hg/clonebundles.manifest << EOF
218 $ cat > server/.hg/clonebundles.manifest << EOF
219 > http://bad.entry BUNDLESPEC=UNKNOWN
219 > http://bad.entry BUNDLESPEC=UNKNOWN
220 > EOF
220 > EOF
221
221
222 $ hg clone -U http://localhost:$HGPORT filter-all
222 $ hg clone -U http://localhost:$HGPORT filter-all
223 no compatible clone bundles available on server; falling back to regular clone
223 no compatible clone bundles available on server; falling back to regular clone
224 (you may want to report this to the server operator)
224 (you may want to report this to the server operator)
225 requesting all changes
225 requesting all changes
226 adding changesets
226 adding changesets
227 adding manifests
227 adding manifests
228 adding file changes
228 adding file changes
229 added 2 changesets with 2 changes to 2 files
229 added 2 changesets with 2 changes to 2 files

URLs requiring SNI are filtered in Python <2.7.9

  $ cp full.hg sni.hg
  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/sni.hg REQUIRESNI=true
  > http://localhost:$HGPORT1/full.hg
  > EOF

#if sslcontext
Python 2.7.9+ supports SNI

  $ hg clone -U http://localhost:$HGPORT sni-supported
  applying clone bundle from http://localhost:$HGPORT1/sni.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
#else
Python <2.7.9 will filter SNI URLs

  $ hg clone -U http://localhost:$HGPORT sni-unsupported
  applying clone bundle from http://localhost:$HGPORT1/full.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
#endif
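The client-side filtering these tests exercise can be sketched as a small helper: entries whose BUNDLESPEC is not a recognized bundle type are skipped, as are entries marked REQUIRESNI=true when the client's ssl module lacks SNI support. This is a hypothetical simplification of the behavior shown above, not the extension's real code; the function name and the known-spec set are illustrative:

```python
def filterclonebundleentries(entries, knownbundlespecs, snisupported):
    """Drop manifest entries the client cannot use.

    ``entries`` is a list of (url, attrs) pairs, where ``attrs`` maps
    attribute names (e.g. BUNDLESPEC, REQUIRESNI) to string values.
    """
    usable = []
    for url, attrs in entries:
        spec = attrs.get('BUNDLESPEC')
        # An advertised-but-unrecognized bundle spec means we could not
        # read the bundle; skip rather than abort.
        if spec is not None and spec not in knownbundlespecs:
            continue
        # Skip entries that require SNI when this Python cannot send it.
        if attrs.get('REQUIRESNI') == 'true' and not snisupported:
            continue
        usable.append((url, attrs))
    return usable
```

When the filter leaves no usable entries, the client falls back to a regular clone, matching the "Automatic fallback when all entries are filtered" case above.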