exchange: add fast path for subrepo check on push...
Joerg Sonnenberger
r49381:28f0092e default
@@ -1,109 +1,111 @@
remotefilelog
=============

The remotefilelog extension allows Mercurial to clone shallow copies of a repository such that all file contents are left on the server and only downloaded on demand by the client. This greatly speeds up clone and pull performance for repositories that have long histories or that are growing quickly.

In addition, the extension allows using a caching layer (such as memcache) to serve the file contents, thus providing better scalability and reducing server load.

Installing
==========

**NOTE:** See the limitations section below to check if remotefilelog will work for your use case.

remotefilelog can be installed like any other Mercurial extension. Download the source code and add the remotefilelog subdirectory to your `hgrc`:

    :::ini
    [extensions]
    remotefilelog=path/to/remotefilelog/remotefilelog

Configuring
-----------

**Server**

* `server` (required) - Set to 'True' to indicate that the server can serve shallow clones.
* `serverexpiration` - The server keeps a local cache of recently requested file revision blobs in .hg/remotefilelogcache. This setting specifies how many days they should be kept locally. Defaults to 30.

An example server configuration:

    :::ini
    [remotefilelog]
    server = True
    serverexpiration = 14

**Client**

* `cachepath` (required) - the location to store locally cached file revisions
* `cachelimit` - the maximum size of the cachepath. By default it's 1000 GB.
* `cachegroup` - the default unix group for the cachepath. Useful on shared systems so multiple users can read and write to the same cache.
* `cacheprocess` - the external process that will handle the remote caching layer. If not set, all requests will go to the Mercurial server.
* `fallbackpath` - the Mercurial repo path to fetch file revisions from. By default it uses the paths.default repo. This setting is useful for cloning from shallow clones and still talking to the central server for file revisions.
* `includepattern` - a list of regex patterns matching files that should be kept remotely. Defaults to all files.
* `excludepattern` - a list of regex patterns matching files that should not be kept remotely and should always be downloaded.
* `pullprefetch` - a revset of commits whose file content should be prefetched after every pull. The most common value for this will be '(bookmark() + head()) & public()'. This is useful in environments where offline work is common, since it will enable offline updating to, rebasing to, and committing on every head and bookmark.

An example client configuration:

    :::ini
    [remotefilelog]
    cachepath = /dev/shm/hgcache
    cachelimit = 2 GB

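Since `fallbackpath` and `pullprefetch` often go together in offline-heavy workflows, a client might extend the example above along these lines (the server URL is illustrative; the revset is the common value suggested earlier):

    :::ini
    [remotefilelog]
    cachepath = /dev/shm/hgcache
    fallbackpath = ssh://server//path/repo
    pullprefetch = (bookmark() + head()) & public()
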
Using as a largefiles replacement
---------------------------------

remotefilelog can theoretically be used as a replacement for the largefiles extension. You can use the `includepattern` setting to specify which directories or file types are considered large and they will be left on the server. Unlike the largefiles extension, this can be done without converting the server repository. Only the client configuration needs to specify the patterns.

The include/exclude settings haven't been extensively tested, so this feature is still considered experimental.

An example largefiles style client configuration:

    :::ini
    [remotefilelog]
    cachepath = /dev/shm/hgcache
    cachelimit = 2 GB
    includepattern = *.sql3
      bin/*

Usage
=====

Once you have configured the server, you can get a shallow clone by doing:

    :::bash
    hg clone --shallow ssh://server//path/repo

After that, all normal Mercurial commands should work.

Occasionally the client or server caches may grow too big. Run `hg gc` to clean up the cache. It will remove cached files that appear to no longer be necessary, or any files that exceed the configured maximum size. This does not improve performance; it just frees up space.

Limitations
===========

1. The extension must be used with Mercurial 3.3 (commit d7d08337b3f6) or higher (earlier versions of the extension work with earlier versions of Mercurial though, up to Mercurial 2.7).

2. remotefilelog has only been tested on linux with case-sensitive filesystems. It should work on other unix systems but may have problems on case-insensitive filesystems.

3. remotefilelog only works with ssh based Mercurial repos. http based repos are currently not supported, though it shouldn't be too difficult for some motivated individual to implement.

4. Tags are not supported in completely shallow repos. If you use tags in your repo you will have to specify `excludepattern=.hgtags` in your client configuration to ensure that file is downloaded. The include/excludepattern settings are experimental at the moment and have yet to be deployed in a production environment.

5. Similarly, subrepositories should not be used with completely shallow repos. Use `excludepattern=.hgsub*` in your client configuration to ensure that the files are downloaded; see the example after this list.

6. A few commands will be slower. `hg log <filename>` will be much slower since it has to walk the entire commit history instead of just the filelog. Use `hg log -f <filename>` instead, which remains very fast.

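For instance, a shallow clone that relies on both tags and subrepositories could combine the exclusions from items 4 and 5 in its client configuration (a sketch; these settings remain experimental, as noted above):

    :::ini
    [remotefilelog]
    cachepath = /dev/shm/hgcache
    excludepattern = .hgtags
      .hgsub*
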
Contributing
============

Patches are welcome as pull requests, though they will be collapsed and rebased to maintain a linear history. Tests can be run via:

    :::bash
    cd tests
    ./run-tests --with-hg=path/to/hgrepo/hg

We (Facebook) have to ask for a "Contributor License Agreement" from someone who sends in a patch or code that we want to include in the codebase. This is a legal requirement; a similar situation applies to Apache and other ASF projects.

If we ask you to fill out a CLA we'll direct you to our [online CLA page](https://developers.facebook.com/opensource/cla) where you can complete it easily. We use the same form as the Apache CLA so that friction is minimal.

License
=======

remotefilelog is made available under the terms of the GNU General Public License version 2, or any later version. See the COPYING file that accompanies this distribution for the full text of the license.
@@ -1,504 +1,504 @@
# remotefilelog.py - filelog implementation where filelog history is stored
# remotely
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import

import collections
import os

from mercurial.node import bin
from mercurial.i18n import _
from mercurial import (
    ancestor,
    error,
    mdiff,
    pycompat,
    revlog,
)
from mercurial.utils import storageutil
from mercurial.revlogutils import flagutil

from . import (
    constants,
    fileserverclient,
    shallowutil,
)


class remotefilelognodemap(object):
    def __init__(self, filename, store):
        self._filename = filename
        self._store = store

    def __contains__(self, node):
        missing = self._store.getmissing([(self._filename, node)])
        return not bool(missing)

    def __get__(self, node):
        if node not in self:
            raise KeyError(node)
        return node


class remotefilelog(object):

    _generaldelta = True
    _flagserrorclass = error.RevlogError

    def __init__(self, opener, path, repo):
        self.opener = opener
        self.filename = path
        self.repo = repo
        self.nodemap = remotefilelognodemap(self.filename, repo.contentstore)

        self.version = 1

        self._flagprocessors = dict(flagutil.flagprocessors)

    def read(self, node):
        """returns the file contents at this node"""
        t = self.revision(node)
        if not t.startswith(b'\1\n'):
            return t
        s = t.index(b'\1\n', 2)
        return t[s + 2 :]

    def add(self, text, meta, transaction, linknode, p1=None, p2=None):
        # hash with the metadata, like in vanilla filelogs
        hashtext = shallowutil.createrevlogtext(
            text, meta.get(b'copy'), meta.get(b'copyrev')
        )
        node = storageutil.hashrevisionsha1(hashtext, p1, p2)
        return self.addrevision(
            hashtext, transaction, linknode, p1, p2, node=node
        )

    def _createfileblob(self, text, meta, flags, p1, p2, node, linknode):
        # text passed to "_createfileblob" does not include filelog metadata
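        # The blob assembled below is laid out as:
        #   <header>\0<text>
        #   <node><p1><p2><linknode><copyfrom>\0    (entry for this revision)
        #   ...one ancestor entry in the same format per ancestor,
        #   appended in topological order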
        header = shallowutil.buildfileblobheader(len(text), flags)
        data = b"%s\0%s" % (header, text)

        realp1 = p1
        copyfrom = b""
        if meta and b'copy' in meta:
            copyfrom = meta[b'copy']
            realp1 = bin(meta[b'copyrev'])

        data += b"%s%s%s%s%s\0" % (node, realp1, p2, linknode, copyfrom)

        visited = set()

        pancestors = {}
        queue = []
        if realp1 != self.repo.nullid:
            p1flog = self
            if copyfrom:
                p1flog = remotefilelog(self.opener, copyfrom, self.repo)

            pancestors.update(p1flog.ancestormap(realp1))
            queue.append(realp1)
            visited.add(realp1)
        if p2 != self.repo.nullid:
            pancestors.update(self.ancestormap(p2))
            queue.append(p2)
            visited.add(p2)

        ancestortext = b""

        # add the ancestors in topological order
        while queue:
            c = queue.pop(0)
            pa1, pa2, ancestorlinknode, pacopyfrom = pancestors[c]

            pacopyfrom = pacopyfrom or b''
            ancestortext += b"%s%s%s%s%s\0" % (
                c,
                pa1,
                pa2,
                ancestorlinknode,
                pacopyfrom,
            )

            if pa1 != self.repo.nullid and pa1 not in visited:
                queue.append(pa1)
                visited.add(pa1)
            if pa2 != self.repo.nullid and pa2 not in visited:
                queue.append(pa2)
                visited.add(pa2)

        data += ancestortext

        return data

    def addrevision(
        self,
        text,
        transaction,
        linknode,
        p1,
        p2,
        cachedelta=None,
        node=None,
        flags=revlog.REVIDX_DEFAULT_FLAGS,
        sidedata=None,
    ):
        # text passed to "addrevision" includes hg filelog metadata header
        if node is None:
            node = storageutil.hashrevisionsha1(text, p1, p2)

        meta, metaoffset = storageutil.parsemeta(text)
        rawtext, validatehash = flagutil.processflagswrite(
            self,
            text,
            flags,
        )
        return self.addrawrevision(
            rawtext,
            transaction,
            linknode,
            p1,
            p2,
            node,
            flags,
            cachedelta,
            _metatuple=(meta, metaoffset),
        )

    def addrawrevision(
        self,
        rawtext,
        transaction,
        linknode,
        p1,
        p2,
        node,
        flags,
        cachedelta=None,
        _metatuple=None,
    ):
        if _metatuple:
            # _metatuple: used by "addrevision" internally by remotefilelog
            # meta was parsed confidently
            meta, metaoffset = _metatuple
        else:
            # not from self.addrevision, but something else (repo._filecommit)
            # calls addrawrevision directly. remotefilelog needs to get and
            # strip filelog metadata.
            # we don't have confidence about whether rawtext contains filelog
            # metadata or not (flag processor could replace it), so we just
            # parse it as best-effort.
            # in LFS (flags != 0)'s case, the best way is to call LFS code to
            # get the meta information, instead of storageutil.parsemeta.
            meta, metaoffset = storageutil.parsemeta(rawtext)
        if flags != 0:
            # when flags != 0, be conservative and do not mangle rawtext, since
            # a read flag processor expects the text not being mangled at all.
            metaoffset = 0
        if metaoffset:
            # remotefilelog fileblob stores copy metadata in its ancestortext,
            # not its main blob. so we need to remove filelog metadata
            # (containing copy information) from text.
            blobtext = rawtext[metaoffset:]
        else:
            blobtext = rawtext
        data = self._createfileblob(
            blobtext, meta, flags, p1, p2, node, linknode
        )
        self.repo.contentstore.addremotefilelognode(self.filename, node, data)

        return node

    def renamed(self, node):
        ancestors = self.repo.metadatastore.getancestors(self.filename, node)
        p1, p2, linknode, copyfrom = ancestors[node]
        if copyfrom:
            return (copyfrom, p1)

        return False

    def size(self, node):
        """return the size of a given revision"""
        return len(self.read(node))

    rawsize = size

    def cmp(self, node, text):
        """compare text with a given file revision

        returns True if text is different than what is stored.
        """

        if node == self.repo.nullid:
            return True

        nodetext = self.read(node)
        return nodetext != text

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def __len__(self):
-        if self.filename == b'.hgtags':
-            # The length of .hgtags is used to fast path tag checking.
-            # remotefilelog doesn't support .hgtags since the entire .hgtags
-            # history is needed. Use the excludepattern setting to make
-            # .hgtags a normal filelog.
+        if self.filename in (b'.hgtags', b'.hgsub', b'.hgsubstate'):
+            # Global tag and subrepository support require access to the
+            # file history for various performance-sensitive operations.
+            # excludepattern should be used for repositories depending on
+            # those features to fall back to regular filelog.
            return 0

        raise RuntimeError(b"len not supported")

    def heads(self):
        # Fake heads of the filelog to satisfy hgweb.
        return []

    def empty(self):
        return False

    def flags(self, node):
        if isinstance(node, int):
            raise error.ProgrammingError(
                b'remotefilelog does not accept integer rev for flags'
            )
        store = self.repo.contentstore
        return store.getmeta(self.filename, node).get(constants.METAKEYFLAG, 0)

    def parents(self, node):
        if node == self.repo.nullid:
            return self.repo.nullid, self.repo.nullid

        ancestormap = self.repo.metadatastore.getancestors(self.filename, node)
        p1, p2, linknode, copyfrom = ancestormap[node]
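        # like vanilla filelogs, copy revisions report a null p1; the copy
        # source is exposed through renamed() instead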
        if copyfrom:
            p1 = self.repo.nullid

        return p1, p2

    def parentrevs(self, rev):
        # TODO(augie): this is a node and should be a rev, but for now
        # nothing in core seems to actually break.
        return self.parents(rev)

    def linknode(self, node):
        ancestormap = self.repo.metadatastore.getancestors(self.filename, node)
        p1, p2, linknode, copyfrom = ancestormap[node]
        return linknode

    def linkrev(self, node):
        return self.repo.unfiltered().changelog.rev(self.linknode(node))

    def emitrevisions(
        self,
        nodes,
        nodesorder=None,
        revisiondata=False,
        assumehaveparentrevisions=False,
        deltaprevious=False,
        deltamode=None,
        sidedata_helpers=None,
    ):
        # we don't use any of these parameters here
        del nodesorder, revisiondata, assumehaveparentrevisions, deltaprevious
        del deltamode
        prevnode = None
        for node in nodes:
            p1, p2 = self.parents(node)
            if prevnode is None:
                basenode = prevnode = p1
            if basenode == node:
                basenode = self.repo.nullid
            if basenode != self.repo.nullid:
                revision = None
                delta = self.revdiff(basenode, node)
            else:
                revision = self.rawdata(node)
                delta = None
            yield revlog.revlogrevisiondelta(
                node=node,
                p1node=p1,
                p2node=p2,
                linknode=self.linknode(node),
                basenode=basenode,
                flags=self.flags(node),
                baserevisionsize=None,
                revision=revision,
                delta=delta,
                # Sidedata is not supported yet
                sidedata=None,
                # Protocol flags are not used yet
                protocol_flags=0,
            )

    def revdiff(self, node1, node2):
        return mdiff.textdiff(self.rawdata(node1), self.rawdata(node2))

    def lookup(self, node):
        if len(node) == 40:
            node = bin(node)
        if len(node) != 20:
            raise error.LookupError(
                node, self.filename, _(b'invalid lookup input')
            )

        return node

    def rev(self, node):
        # This is a hack to make TortoiseHG work.
        return node

    def node(self, rev):
        # This is a hack.
        if isinstance(rev, int):
            raise error.ProgrammingError(
                b'remotefilelog does not convert integer rev to node'
            )
        return rev

    def revision(self, node, raw=False):
        """returns the revlog contents at this node.
        this includes the meta data traditionally included in file revlogs.
        this is generally only used for bundling and communicating with vanilla
        hg clients.
        """
        if node == self.repo.nullid:
            return b""
        if len(node) != 20:
            raise error.LookupError(
                node, self.filename, _(b'invalid revision input')
            )
        if (
            node == self.repo.nodeconstants.wdirid
            or node in self.repo.nodeconstants.wdirfilenodeids
        ):
            raise error.WdirUnsupported

        store = self.repo.contentstore
        rawtext = store.get(self.filename, node)
        if raw:
            return rawtext
        flags = store.getmeta(self.filename, node).get(constants.METAKEYFLAG, 0)
        if flags == 0:
            return rawtext
        return flagutil.processflagsread(self, rawtext, flags)[0]

    def rawdata(self, node):
        return self.revision(node, raw=False)

    def _read(self, id):
        """reads the raw file blob from disk, cache, or server"""
        fileservice = self.repo.fileservice
        localcache = fileservice.localcache
        cachekey = fileserverclient.getcachekey(
            self.repo.name, self.filename, id
        )
        try:
            return localcache.read(cachekey)
        except KeyError:
            pass

        localkey = fileserverclient.getlocalkey(self.filename, id)
        localpath = os.path.join(self.localpath, localkey)
        try:
            return shallowutil.readfile(localpath)
        except IOError:
            pass

        fileservice.prefetch([(self.filename, id)])
        try:
            return localcache.read(cachekey)
        except KeyError:
            pass

        raise error.LookupError(id, self.filename, _(b'no node'))

    def ancestormap(self, node):
        return self.repo.metadatastore.getancestors(self.filename, node)

    def ancestor(self, a, b):
        if a == self.repo.nullid or b == self.repo.nullid:
            return self.repo.nullid

        revmap, parentfunc = self._buildrevgraph(a, b)
        nodemap = {v: k for (k, v) in pycompat.iteritems(revmap)}

        ancs = ancestor.ancestors(parentfunc, revmap[a], revmap[b])
        if ancs:
            # choose a consistent winner when there's a tie
            return min(map(nodemap.__getitem__, ancs))
        return self.repo.nullid

    def commonancestorsheads(self, a, b):
        """calculate all the heads of the common ancestors of nodes a and b"""

        if a == self.repo.nullid or b == self.repo.nullid:
            return self.repo.nullid

        revmap, parentfunc = self._buildrevgraph(a, b)
        nodemap = {v: k for (k, v) in pycompat.iteritems(revmap)}

        ancs = ancestor.commonancestorsheads(parentfunc, revmap[a], revmap[b])
        return map(nodemap.__getitem__, ancs)

    def _buildrevgraph(self, a, b):
        """Builds a numeric revision graph for the given two nodes.
        Returns a node->rev map and a rev->[revs] parent function.
        """
        amap = self.ancestormap(a)
        bmap = self.ancestormap(b)

        # Union the two maps
        parentsmap = collections.defaultdict(list)
        allparents = set()
        for mapping in (amap, bmap):
            for node, pdata in pycompat.iteritems(mapping):
                parents = parentsmap[node]
                p1, p2, linknode, copyfrom = pdata
                # Don't follow renames (copyfrom).
                # remotefilectx.ancestor does that.
                if p1 != self.repo.nullid and not copyfrom:
                    parents.append(p1)
                    allparents.add(p1)
                if p2 != self.repo.nullid:
                    parents.append(p2)
                    allparents.add(p2)

        # Breadth first traversal to build linkrev graph
        parentrevs = collections.defaultdict(list)
        revmap = {}
        queue = collections.deque(
            ((None, n) for n in parentsmap if n not in allparents)
        )
        while queue:
            prevrev, current = queue.pop()
            if current in revmap:
                if prevrev:
                    parentrevs[prevrev].append(revmap[current])
                continue

            # Assign linkrevs in reverse order, so start at
            # len(parentsmap) and work backwards.
            currentrev = len(parentsmap) - len(revmap) - 1
            revmap[current] = currentrev

            if prevrev:
                parentrevs[prevrev].append(currentrev)

            for parent in parentsmap.get(current):
                queue.appendleft((currentrev, parent))

        return revmap, parentrevs.__getitem__

    def strip(self, minlink, transaction):
        pass

    # misc unused things
    def files(self):
        return []

    def checksize(self):
        return 0, 0
@@ -1,2829 +1,2837 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import weakref

from .i18n import _
from .node import (
    hex,
    nullrev,
)
from . import (
    bookmarks as bookmod,
    bundle2,
    bundlecaches,
    changegroup,
    discovery,
    error,
    lock as lockmod,
    logexchange,
    narrowspec,
    obsolete,
    obsutil,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    url as urlmod,
    util,
    wireprototypes,
)
from .utils import (
    hashutil,
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_NARROWACL_SECTION = b'narrowacl'


def readbundle(ui, fh, fname, vfs=None):
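    # The first four bytes identify the bundle: a b'HG' magic plus a two-byte
    # version (b'10', b'2X', or b'S1'), or a headerless changegroup starting
    # with b'\0', which is fixed up below as an uncompressed HG10 stream.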
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = b"stream"
    if not header.startswith(b'HG') and header.startswith(b'\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = b"HG10"
        alg = b'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != b'HG':
        raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
    if version == b'10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith(b'2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == b'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(
            _(b'%s: unknown bundle version %s') % (fname, version)
        )


def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """

    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == b'_truncatedBZ':
            alg = b'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
        return b'%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if b'Compression' in b.params:
            comp = speccompression(b.params[b'Compression'])
            if not comp:
                raise error.Abort(
                    _(b'unknown compression algorithm: %s') % comp
                )
        else:
            comp = b'none'

        version = None
        for part in b.iterparts():
            if part.type == b'changegroup':
                version = part.params[b'version']
                if version in (b'01', b'02'):
                    version = b'v2'
                else:
                    raise error.Abort(
                        _(
                            b'changegroup version %s does not have '
                            b'a known bundlespec'
                        )
                        % version,
                        hint=_(b'try upgrading your Mercurial client'),
                    )
            elif part.type == b'stream2' and version is None:
                # A stream2 part requires to be part of a v2 bundle
                requirements = urlreq.unquote(part.params[b'requirements'])
                splitted = requirements.split()
                params = bundle2._formatrequirementsparams(splitted)
                return b'none-v2;stream=v2;%s' % params

        if not version:
            raise error.Abort(
                _(b'could not identify changegroup version in bundle')
            )

        return b'%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        formatted = bundle2._formatrequirementsparams(requirements)
        return b'none-packed1;%s' % formatted
    else:
        raise error.Abort(_(b'unknown bundle type: %s') % b)


def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [repo.nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)


def _checkpublish(pushop):
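    # Implements the experimental.auto-publish config; recognized values
    # are b'warn', b'confirm', and b'abort', e.g.:
    #
    #   [experimental]
    #   auto-publish = confirm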
    repo = pushop.repo
    ui = repo.ui
    behavior = ui.config(b'experimental', b'auto-publish')
    if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
        return
    remotephases = listkeys(pushop.remote, b'phases')
    if not remotephases.get(b'publishing', False):
        return

    if pushop.revs is None:
        published = repo.filtered(b'served').revs(b'not public()')
    else:
        published = repo.revs(b'::%ln - public()', pushop.revs)
        # we want to use pushop.revs in the revset even if they themselves are
        # secret, but we don't want to have anything that the server won't see
        # in the result of this expression
        published &= repo.filtered(b'served')
    if published:
        if behavior == b'warn':
            ui.warn(
                _(b'%i changesets about to be published\n') % len(published)
            )
        elif behavior == b'confirm':
            if ui.promptchoice(
                _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
                % len(published)
            ):
                raise error.CanceledError(_(b'user quit'))
        elif behavior == b'abort':
            msg = _(b'push would publish %i changesets') % len(published)
            hint = _(
                b"use --publish or adjust 'experimental.auto-publish'"
                b" config"
            )
            raise error.Abort(msg, hint=hint)


def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    # The goal of this config is to allow developers to choose the bundle
    # version used during exchange. This is especially handy during tests.
    # The value is a list of bundle versions to pick from; the highest
    # version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist(b'devel', b'legacy.exchange')
    forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
    return forcebundle1 or not op.remote.capable(b'bundle2')


class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        force=False,
        revs=None,
        newbranch=False,
        bookmarks=(),
        publish=False,
        pushvars=None,
    ):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # steps already performed
        # (used to check what steps have already been performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # summary of the remote phase situation
        self.remotephases = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exception from mandatory pushkey part failure
        self.pkfailcb = {}
        # an iterable of pushvars or None
        self.pushvars = pushvars
        # publish pushed changesets
        self.publish = publish

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.ancestorsof

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::ancestorsof and ::commonheads)
        # (ancestorsof is revs with secret changeset filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (ancestorsof and ::commonheads)
        #              + (commonheads and ::ancestorsof)
        #              )
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::ancestorsof) - commonheads)
        #
        # We can pick:
        # * ancestorsof part of common (::commonheads)
        common = self.outgoing.common
        rev = self.repo.changelog.index.rev
        cheads = [node for node in self.revs if rev(node) in common]
338 cheads = [node for node in self.revs if rev(node) in common]
339 # and
339 # and
340 # * commonheads parents on missing
340 # * commonheads parents on missing
341 revset = unfi.set(
341 revset = unfi.set(
342 b'%ln and parents(roots(%ln))',
342 b'%ln and parents(roots(%ln))',
343 self.outgoing.commonheads,
343 self.outgoing.commonheads,
344 self.outgoing.missing,
344 self.outgoing.missing,
345 )
345 )
346 cheads.extend(c.node() for c in revset)
346 cheads.extend(c.node() for c in revset)
347 return cheads
347 return cheads
348
348
349 @property
349 @property
350 def commonheads(self):
350 def commonheads(self):
351 """set of all common heads after changeset bundle push"""
351 """set of all common heads after changeset bundle push"""
352 if self.cgresult:
352 if self.cgresult:
353 return self.futureheads
353 return self.futureheads
354 else:
354 else:
355 return self.fallbackheads
355 return self.fallbackheads
356
356
357
357
358 # mapping of message used when pushing bookmark
358 # mapping of message used when pushing bookmark
359 bookmsgmap = {
359 bookmsgmap = {
360 b'update': (
360 b'update': (
361 _(b"updating bookmark %s\n"),
361 _(b"updating bookmark %s\n"),
362 _(b'updating bookmark %s failed\n'),
362 _(b'updating bookmark %s failed\n'),
363 ),
363 ),
364 b'export': (
364 b'export': (
365 _(b"exporting bookmark %s\n"),
365 _(b"exporting bookmark %s\n"),
366 _(b'exporting bookmark %s failed\n'),
366 _(b'exporting bookmark %s failed\n'),
367 ),
367 ),
368 b'delete': (
368 b'delete': (
369 _(b"deleting remote bookmark %s\n"),
369 _(b"deleting remote bookmark %s\n"),
370 _(b'deleting remote bookmark %s failed\n'),
370 _(b'deleting remote bookmark %s failed\n'),
371 ),
371 ),
372 }
372 }
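# Each value above is a (success message, failure message) pair; the reply
# handlers further down use index 0 when the server accepted the bookmark
# change and index 1 when it failed or was ignored.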


def push(
    repo,
    remote,
    force=False,
    revs=None,
    newbranch=False,
    bookmarks=(),
    publish=False,
    opargs=None,
):
    """Push outgoing changesets (limited by revs) from a local
    repository to remote. Return the pushoperation object; its ``cgresult``
    attribute holds an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    """
    if opargs is None:
        opargs = {}
    pushop = pushoperation(
        repo,
        remote,
        force,
        revs,
        newbranch,
        bookmarks,
        publish,
        **pycompat.strkwargs(opargs)
    )
    if pushop.remote.local():
        missing = (
            set(pushop.repo.requirements) - pushop.remote.local().supported
        )
        if missing:
            msg = _(
                b"required features are not"
                b" supported in the destination:"
                b" %s"
            ) % (b', '.join(sorted(missing)))
            raise error.Abort(msg)

    if not pushop.remote.canpush():
        raise error.Abort(_(b"destination does not support push"))

    if not pushop.remote.capable(b'unbundle'):
        raise error.Abort(
            _(
                b'cannot push: destination does not support the '
                b'unbundle wire protocol command'
            )
        )
    for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
        # Check that a computer is registered for that category for at least
        # one revlog kind.
        for kind, computers in repo._sidedata_computers.items():
            if computers.get(category):
                break
        else:
            raise error.Abort(
                _(
                    b'cannot push: required sidedata category not supported'
                    b" by this client: '%s'"
                )
                % pycompat.bytestr(category)
            )
    # get lock as we might write phase data
    wlock = lock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks
        # requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool(b'experimental', b'bundle2.pushback')
        if (
            (not _forcebundle1(pushop))
            and maypushback
            and not bookmod.bookmarksinstore(repo)
        ):
            wlock = pushop.repo.wlock()
        lock = pushop.repo.lock()
        pushop.trmanager = transactionmanager(
            pushop.repo, b'push-response', pushop.remote.url()
        )
    except error.LockUnavailable as err:
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = b'cannot lock source repository: %s\n' % stringutil.forcebytestr(
            err
        )
        pushop.ui.debug(msg)

    with wlock or util.nullcontextmanager():
        with lock or util.nullcontextmanager():
            with pushop.trmanager or util.nullcontextmanager():
                pushop.repo.checkpush(pushop)
                _checkpublish(pushop)
                _pushdiscovery(pushop)
                if not pushop.force:
                    _checksubrepostate(pushop)
                if not _forcebundle1(pushop):
                    _pushbundle2(pushop)
                _pushchangeset(pushop)
                _pushsyncphase(pushop)
                _pushobsolete(pushop)
                _pushbookmark(pushop)

    if repo.ui.configbool(b'experimental', b'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pushop


# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}


def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscovery dictionary directly."""

    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func

    return dec


def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)


def _checksubrepostate(pushop):
    """Ensure all outgoing referenced subrepo revisions are present locally"""

    repo = pushop.repo

    # If the repository does not use subrepos, skip the expensive
    # manifest checks.
    if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
        return
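    # Note: len(repo.file(b'.hgsub')) is the number of revisions in the
    # .hgsub filelog. An empty filelog means no changeset has ever tracked a
    # subrepo, so the per-changeset manifest walk below can be skipped.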

    for n in pushop.outgoing.missing:
        ctx = repo[n]

        if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
            for subpath in sorted(ctx.substate):
                sub = ctx.sub(subpath)
                sub.verify(onpush=True)


@pushdiscovery(b'changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    if pushop.revs:
        commoninc = fci(
            pushop.repo,
            pushop.remote,
            force=pushop.force,
            ancestorsof=pushop.revs,
        )
    else:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(
        pushop.repo,
        pushop.remote,
        onlyheads=pushop.revs,
        commoninc=commoninc,
        force=pushop.force,
    )
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc


@pushdiscovery(b'phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = listkeys(pushop.remote, b'phases')

    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and not pushop.outgoing.missing  # no changesets to be pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changesets are to be pushed
        # - and the remote is publishing
        # We may be in the issue 3781 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        pushop.outdatedphases = []
        pushop.fallbackoutdatedphases = []
        return

    pushop.remotephases = phases.remotephasessummary(
        pushop.repo, pushop.fallbackheads, remotephases
    )
    droots = pushop.remotephases.draftroots

    extracond = b''
    if not pushop.remotephases.publishing:
        extracond = b' and public()'
    revset = b'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that this revset breaks if droots is not strictly made of
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not pushop.remotephases.publishing and pushop.publish:
        future = list(
            unfi.set(
                b'%ln and (not public() or %ln::)', pushop.futureheads, droots
            )
        )
    elif not outgoing.missing:
        future = fallback
    else:
        # adds the changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(
            unfi.set(b'roots(%ln + %ln::)', outgoing.missing, droots)
        )
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback


@pushdiscovery(b'obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
        return

    if not pushop.repo.obsstore:
        return

    if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
        return

    repo = pushop.repo
    # very naive computation, which can be quite expensive on big repos.
    # However: evolution is currently slow on them anyway.
    nodes = (c.node() for c in repo.set(b'::%ln', pushop.futureheads))
    pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)


@pushdiscovery(b'bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug(b"checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)

    remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))

    explicit = {
        repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
    }

    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
    return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)


def _processcompared(pushop, pushed, explicit, remotebms, comp):
    """take decisions on bookmarks to push to the remote repo

    Exists to help extensions alter this behavior.
    """
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
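    # Buckets as documented by bookmod.comparebookmarks: bookmarks added on
    # the source side, added on the destination side, advanced on source,
    # advanced on destination, diverged, differing (the destination does not
    # know the source changeset), invalid (unknown on both sides), and same.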

    repo = pushop.repo

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not pushed or repo[scid].rev() in pushed:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        if bookmod.isdivergent(b):
            pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
            pushop.bkresult = 2
        else:
            pushop.outbookmarks.append((b, b'', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
            # treat as "deleted locally"
            pushop.outbookmarks.append((b, dcid, b''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        pushop.ui.warn(
            _(
                b'bookmark %s does not exist on the local '
                b'or remote repository!\n'
            )
            % explicit[0]
        )
        pushop.bkresult = 2

    pushop.outbookmarks.sort()


def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are defined here to respect the 80-char limit
            mso = _(b"push includes obsolete changeset: %s!")
            mspd = _(b"push includes phase-divergent changeset: %s!")
            mscd = _(b"push includes content-divergent changeset: %s!")
            mst = {
                b"orphan": _(b"push includes orphan changeset: %s!"),
                b"phase-divergent": mspd,
                b"content-divergent": mscd,
            }
            # If there is at least one obsolete or unstable changeset in
            # missing, at least one of the missing heads will be obsolete or
            # unstable, so checking the heads only is ok.
            for node in outgoing.ancestorsof:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.isunstable():
                    # TODO print more than one instability in the abort
                    # message
                    raise error.Abort(mst[ctx.instabilities()[0]] % ctx)

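        # checkheads is where pushes that would create new remote heads or
        # new named branches get rejected (subject to pushop.newbranch).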
        discovery.checkheads(pushop)
    return True


# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}


def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the b2partsgenmapping dictionary directly."""

    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func

    return dec
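# A non-None idx places the step at that position in b2partsgenorder instead
# of appending it; the b'pushvars' generator below uses idx=0 so its part is
# emitted before all other parts in the bundle.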


def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.ancestorsof:
        allowunrelated = b'related' in bundler.capabilities.get(
            b'checkheads', ()
        )
        emptyremote = pushop.pushbranchmap is None
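        # 'check:heads' pins the exact set of remote heads we saw, so any
        # concurrent remote change aborts the push; 'check:updated-heads'
        # only pins the heads this push actually affects, tolerating
        # unrelated remote activity.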
        if not allowunrelated or emptyremote:
            bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pycompat.iteritems(pushop.pushbranchmap):
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart(b'check:updated-heads', data=data)


def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(
        pushop.outgoing.missing
        or pushop.outdatedphases
        or pushop.outobsmarkers
        or pushop.outbookmarks
    )


@b2partsgenerator(b'check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = b'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        data.append((book, old))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'check:bookmarks', data=checkdata)


@b2partsgenerator(b'check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = {p: [] for p in phases.allphases}
        checks[phases.public].extend(pushop.remotephases.publicheads)
        checks[phases.draft].extend(pushop.remotephases.draftroots)
        if any(pycompat.itervalues(checks)):
            for phase in checks:
                checks[phase].sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart(b'check:phases', data=checkdata)


@b2partsgenerator(b'changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(pushop.repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)

    remote_sidedata = bundle2.read_remote_wanted_sidedata(pushop.remote)
    cgstream = changegroup.makestream(
        pushop.repo,
        pushop.outgoing,
        version,
        b'push',
        bundlecaps=b2caps,
        remote_sidedata=remote_sidedata,
    )
    cgpart = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam(b'version', version)
    if scmutil.istreemanifest(pushop.repo):
        cgpart.addparam(b'treemanifest', b'1')
    if repository.REPO_FEATURE_SIDE_DATA in pushop.repo.features:
        cgpart.addparam(b'exp-sidedata', b'1')

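    # Part generators may return a callable; _pushbundle2 collects these
    # reply handlers and invokes each one with the processed bundle2 reply.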
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies[b'changegroup']) == 1
        pushop.cgresult = cgreplies[b'changegroup'][0][b'return']

    return handlereply


@b2partsgenerator(b'phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if b'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
    haspushkey = b'pushkey' in b2caps
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)


def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add(b'phases')
    if pushop.outdatedphases:
        updates = {p: [] for p in phases.allphases}
        updates[0].extend(h.node() for h in pushop.outdatedphases)
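        # phases.public == 0, so index 0 collects the heads that must become
        # public on the remote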
        phasedata = phases.binaryencode(updates)
        bundler.newpart(b'phase-heads', data=phasedata)


def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add(b'phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_(b'updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'phases'))
        part.addparam(b'key', enc(newremotehead.hex()))
        part.addparam(b'old', enc(b'%d' % phases.draft))
        part.addparam(b'new', enc(b'%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _(b'server ignored update of %s to public!\n') % node
            elif not int(results[0][b'return']):
                msg = _(b'updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)

    return handlereply


@b2partsgenerator(b'obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if b'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)


@b2partsgenerator(b'bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if b'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist(b'devel', b'legacy.exchange')
    legacybooks = b'bookmarks' in legacy

    if not legacybooks and b'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif b'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)


def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return b'export'
    elif not new:
        return b'delete'
    return b'update'


def _abortonsecretctx(pushop, node, b):
    """abort if a given bookmark points to a secret changeset"""
    if node and pushop.repo[node].phase() == phases.secret:
        raise error.Abort(
            _(b'cannot push bookmark %s as it points to a secret changeset') % b
        )


def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply


def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for a part we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'bookmarks'))
        part.addparam(b'key', enc(book))
        part.addparam(b'old', enc(hex(old)))
        part.addparam(b'new', enc(hex(new)))
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_(b'server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0][b'return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1

    return handlereply


@b2partsgenerator(b'pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
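    # These typically come from the CLI, e.g. `hg push --pushvars "DEBUG=1"`;
    # the receiving side is expected to expose each variable to its hooks as
    # an HG_USERVAR_* environment variable.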
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if b'=' not in raw:
                msg = (
                    b"unable to parse variable '%s', should follow "
                    b"'KEY=VALUE' or 'KEY=' format"
                )
                raise error.Abort(msg % raw)
            k, v = raw.split(b'=', 1)
            shellvars[k] = v

        part = bundler.newpart(b'pushvars')

        for key, value in pycompat.iteritems(shellvars):
            part.addparam(key, value, mandatory=False)


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = pushop.trmanager and pushop.ui.configbool(
        b'experimental', b'bundle2.pushback'
    )

    # create reply capability
    capsblob = bundle2.encodecaps(
        bundle2.getrepocaps(pushop.repo, allowpushback=pushback, role=b'client')
    )
    bundler.newpart(b'replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
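    # (the replycaps part created above is always present, hence the <= 1)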
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            with pushop.remote.commandexecutor() as e:
                reply = e.callcommand(
                    b'unbundle',
                    {
                        b'bundle': stream,
                        b'heads': [b'force'],
                        b'url': pushop.remote.url(),
                    },
                ).result()
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.error(_(b'remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.error(_(b'remote: %s\n') % (b'(%s)' % exc.hint))
            raise error.RemoteError(_(b'push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)


def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable(b'unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (
        outgoing.excluded or pushop.repo.changelog.filteredrevs
    ):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(
            pushop.repo,
            outgoing,
            b'01',
            b'push',
            fastpath=True,
            bundlecaps=bundlecaps,
        )
    else:
        cg = changegroup.makechangegroup(
            pushop.repo, outgoing, b'01', b'push', bundlecaps=bundlecaps
        )

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = [b'force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads, pushop.repo.url())
1221
1229
1222
1230
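# Illustrative sketch (not part of the original module): the fast path above
# is only safe when the outgoing set needs no filtering. A hypothetical
# predicate spelling out the same condition:
#
#     def _canusefastpath(pushop, outgoing):
#         return pushop.revs is None and not (
#             outgoing.excluded or pushop.repo.changelog.filteredrevs
#         )

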
def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = listkeys(pushop.remote, b'phases')
    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and pushop.cgresult is None  # nothing was pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # we may be in the issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {b'publishing': b'True'}
    if not remotephases:  # old server, or public-only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads, remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get(b'publishing', False):
            _localphasemove(pushop, cheads)
        else:  # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if b'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add(b'phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            with pushop.remote.commandexecutor() as e:
                r = e.callcommand(
                    b'pushkey',
                    {
                        b'namespace': b'phases',
                        b'key': newremotehead.hex(),
                        b'old': b'%d' % phases.draft,
                        b'new': b'%d' % phases.public,
                    },
                ).result()

            if not r:
                pushop.ui.warn(
                    _(b'updating %s to public failed!\n') % newremotehead
                )


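# Illustrative sketch (not part of the original module): each legacy phase
# move above is one pushkey call whose payload is four byte strings. Assuming
# `ctx` is a changectx for the head being published:
#
#     args = {
#         b'namespace': b'phases',
#         b'key': ctx.hex(),              # hex node id of the head to publish
#         b'old': b'%d' % phases.draft,   # expected current remote phase
#         b'new': b'%d' % phases.public,  # requested new phase
#     }

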
def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(
            pushop.repo, pushop.trmanager.transaction(), phase, nodes
        )
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(
                _(
                    b'cannot lock source repo, skipping '
                    b'local %s phase update\n'
                )
                % phasestr
            )


def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if b'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug(b'try to push obsolete markers to remote\n')
        rslts = []
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        remotedata = obsolete._pushkeyescape(markers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey(b'obsolete', key, b'', data))
        if [r for r in rslts if not r]:
            msg = _(b'failed to push some obsolete markers!\n')
            repo.ui.warn(msg)


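# Illustrative sketch (not part of the original module): the reverse sort
# above guarantees the b'dump0' key is pushed last, e.g.:
#
#     >>> sorted([b'dump0', b'dump1', b'dump2'], reverse=True)
#     [b'dump2', b'dump1', b'dump0']

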
def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or b'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'

        with remote.commandexecutor() as e:
            r = e.callcommand(
                b'pushkey',
                {
                    b'namespace': b'bookmarks',
                    b'key': b,
                    b'old': hex(old),
                    b'new': hex(new),
                },
            ).result()

        if r:
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1


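# Illustrative sketch (not part of the original module): the action chosen
# above depends only on which side of the (old, new) pair is empty. As a
# hypothetical helper:
#
#     def _bookmarkaction(old, new):
#         if not old:
#             return b'export'  # bookmark is new on the remote
#         if not new:
#             return b'delete'  # bookmark is removed from the remote
#         return b'update'      # bookmark moves to a new node

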
class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        heads=None,
        force=False,
        bookmarks=(),
        remotebookmarks=None,
        streamclonerequested=None,
        includepats=None,
        excludepats=None,
        depth=None,
        path=None,
    ):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # path object used to build this remote
        #
        # Ideally, the remote peer would carry that directly.
        self.remote_path = path
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [
            repo._bookmarks.expandname(bookmark) for bookmark in bookmarks
        ]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False
        # Set of file patterns to include.
        self.includepats = includepats
        # Set of file patterns to exclude.
        self.excludepats = excludepats
        # Number of ancestor changesets to pull from each pulled head.
        self.depth = depth

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()


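# Illustrative sketch (not part of the original module): callers normally go
# through pull() below rather than building the operation by hand, but a
# minimal construction would look like this, assuming `repo` is a local
# repository and `remote` a peer:
#
#     pullop = pulloperation(repo, remote, heads=None, force=False)
#     assert pullop.stepsdone == set()  # no exchange step has run yet
#     assert pullop.cgresult is None    # no changegroup result yet

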
class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""

    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = b'%s\n%s' % (self.source, urlutil.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs[b'source'] = self.source
            self._tr.hookargs[b'url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()


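# Illustrative sketch (not part of the original module): util.transactional
# gives transactionmanager context-manager behaviour (close() on success,
# release() in all cases), so typical usage follows the pattern seen in
# pull() below, assuming `repo` and `url` are in scope:
#
#     trmanager = transactionmanager(repo, b'pull', url)
#     with trmanager:
#         tr = trmanager.transaction()  # opened lazily, on first use
#         ...                           # closed on success, released at exit

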
def listkeys(remote, namespace):
    with remote.commandexecutor() as e:
        return e.callcommand(b'listkeys', {b'namespace': namespace}).result()


def _fullpullbundle2(repo, pullop):
    # The server may send a partial reply, i.e. when inlining
    # pre-computed bundles. In that case, update the common
    # set based on the results and pull another bundle.
    #
    # There are two indicators that the process is finished:
    # - no changeset has been added, or
    # - all remote heads are known locally.
    # The head check must use the unfiltered view as obsoletion
    # markers can hide heads.
    unfi = repo.unfiltered()
    unficl = unfi.changelog

    def headsofdiff(h1, h2):
        """Returns heads(h1 % h2)"""
        res = unfi.set(b'heads(%ln %% %ln)', h1, h2)
        return {ctx.node() for ctx in res}

    def headsofunion(h1, h2):
        """Returns heads((h1 + h2) - null)"""
        res = unfi.set(b'heads((%ln + %ln - null))', h1, h2)
        return {ctx.node() for ctx in res}

    while True:
        old_heads = unficl.heads()
        clstart = len(unficl)
        _pullbundle2(pullop)
        if requirements.NARROW_REQUIREMENT in repo.requirements:
            # XXX narrow clones filter the heads on the server side during
            # XXX getbundle and result in partial replies as well.
            # XXX Disable pull bundles in this case as band aid to avoid
            # XXX extra round trips.
            break
        if clstart == len(unficl):
            break
        if all(unficl.hasnode(n) for n in pullop.rheads):
            break
        new_heads = headsofdiff(unficl.heads(), old_heads)
        pullop.common = headsofunion(new_heads, pullop.common)
        pullop.rheads = set(pullop.rheads) - pullop.common


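# Illustrative sketch (not part of the original module): the loop above,
# reduced to its control flow, with hypothetical helper names:
#
#     while True:
#         before = len(changelog)
#         fetch_one_bundle()
#         if len(changelog) == before:          # nothing new arrived
#             break
#         if all_remote_heads_known_locally():  # pull is complete
#             break
#         grow_common_set_and_retry()           # partial reply: ask again

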
def add_confirm_callback(repo, pullop):
    """adds a finalize callback to the transaction which can be used to show
    stats to the user and confirm the pull before committing the transaction"""

    tr = pullop.trmanager.transaction()
    scmutil.registersummarycallback(
        repo, tr, txnname=b'pull', as_validator=True
    )
    reporef = weakref.ref(repo.unfiltered())

    def prompt(tr):
        repo = reporef()
        cm = _(b'accept incoming changes (yn)?$$ &Yes $$ &No')
        if repo.ui.promptchoice(cm):
            raise error.Abort(b"user aborted")

    tr.addvalidator(b'900-pull-prompt', prompt)


def pull(
    repo,
    remote,
    path=None,
    heads=None,
    force=False,
    bookmarks=(),
    opargs=None,
    streamclonerequested=None,
    includepats=None,
    excludepats=None,
    depth=None,
    confirm=None,
):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.
    ``includepats`` and ``excludepats`` define explicit file patterns to
    include and exclude in storage, respectively. If not defined, narrow
    patterns from the repo instance are used, if available.
    ``depth`` is an integer indicating the DAG depth of history we're
    interested in. If defined, for each revision specified in ``heads``, we
    will fetch up to this many of its ancestors and data associated with them.
    ``confirm`` is a boolean indicating whether the pull should be confirmed
    before committing the transaction. This overrides HGPLAIN.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}

    # We allow the narrow patterns to be passed in explicitly to provide more
    # flexibility for API consumers.
    if includepats or excludepats:
        includepats = includepats or set()
        excludepats = excludepats or set()
    else:
        includepats, excludepats = repo.narrowpats

    narrowspec.validatepatterns(includepats)
    narrowspec.validatepatterns(excludepats)

    pullop = pulloperation(
        repo,
        remote,
        path=path,
        heads=heads,
        force=force,
        bookmarks=bookmarks,
        streamclonerequested=streamclonerequested,
        includepats=includepats,
        excludepats=excludepats,
        depth=depth,
        **pycompat.strkwargs(opargs)
    )

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _(
                b"required features are not"
                b" supported in the destination:"
                b" %s"
            ) % (b', '.join(sorted(missing)))
            raise error.Abort(msg)

    for category in repo._wanted_sidedata:
        # Check that a computer is registered for that category for at least
        # one revlog kind.
        for kind, computers in repo._sidedata_computers.items():
            if computers.get(category):
                break
        else:
            # This should never happen since repos are supposed to be able to
            # generate the sidedata they require.
            raise error.ProgrammingError(
                _(
                    b'sidedata category requested by local side without local'
                    b"support: '%s'"
                )
                % pycompat.bytestr(category)
            )

    pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
    wlock = util.nullcontextmanager()
    if not bookmod.bookmarksinstore(repo):
        wlock = repo.wlock()
    with wlock, repo.lock(), pullop.trmanager:
        if confirm or (
            repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
        ):
            add_confirm_callback(repo, pullop)

        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        streamclone.maybeperformlegacystreamclone(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _fullpullbundle2(repo, pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)

        # storing remotenames
        if repo.ui.configbool(b'experimental', b'remotenames'):
            logexchange.pullremotenames(repo, remote)

    return pullop


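# Illustrative sketch (not part of the original module): a minimal caller,
# assuming `repo` is a local repository and `remote` a peer obtained
# elsewhere (e.g. via mercurial.hg.peer):
#
#     pullop = pull(repo, remote)  # pull everything from the remote
#     if pullop.cgresult == 0:
#         repo.ui.status(b'no changes found\n')

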
# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}


def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""

    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func

    return dec


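# Illustrative sketch (not part of the original module): an extension adding
# its own discovery step would use the decorator like this; the step name and
# body are hypothetical:
#
#     @pulldiscovery(b'example:log-remote')
#     def _logremote(pullop):
#         pullop.repo.ui.debug(b'discovery ran for %s\n' % pullop.remote.url())

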
def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)


@pulldiscovery(b'b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    books = listkeys(pullop.remote, b'bookmarks')
    pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)


@pulldiscovery(b'changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(
        pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
    )
    common, fetch, rheads = tmp
    has_node = pullop.repo.unfiltered().changelog.index.has_node
    if fetch and rheads:
        # If a remote head is filtered locally, put it back in common.
        #
        # This is a hackish solution to catch most of the "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological number of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        for n in rheads:
            if has_node(n):
                if n not in scommon:
                    common.append(n)
        if set(rheads).issubset(set(common)):
            fetch = []
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads


def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroup."""
    kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}

    # make ui easier to access
    ui = pullop.repo.ui

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]

    # declare pull perimeters
    kwargs[b'common'] = pullop.common
    kwargs[b'heads'] = pullop.heads or pullop.rheads

    # check whether the server supports narrow, then add includepats and
    # excludepats
    servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
    if servernarrow and pullop.includepats:
        kwargs[b'includepats'] = pullop.includepats
    if servernarrow and pullop.excludepats:
        kwargs[b'excludepats'] = pullop.excludepats

    if streaming:
        kwargs[b'cg'] = False
        kwargs[b'stream'] = True
        pullop.stepsdone.add(b'changegroup')
        pullop.stepsdone.add(b'phases')

    else:
        # pulling changegroup
        pullop.stepsdone.add(b'changegroup')

        kwargs[b'cg'] = pullop.fetch

        legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
        hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
        if not legacyphase and hasbinaryphase:
            kwargs[b'phases'] = True
            pullop.stepsdone.add(b'phases')

        if b'listkeys' in pullop.remotebundle2caps:
            if b'phases' not in pullop.stepsdone:
                kwargs[b'listkeys'] = [b'phases']

    bookmarksrequested = False
    legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
    hasbinarybook = b'bookmarks' in pullop.remotebundle2caps

    if pullop.remotebookmarks is not None:
        pullop.stepsdone.add(b'request-bookmarks')

    if (
        b'request-bookmarks' not in pullop.stepsdone
        and pullop.remotebookmarks is None
        and not legacybookmark
        and hasbinarybook
    ):
        kwargs[b'bookmarks'] = True
        bookmarksrequested = True

    if b'listkeys' in pullop.remotebundle2caps:
        if b'request-bookmarks' not in pullop.stepsdone:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            pullop.stepsdone.add(b'request-bookmarks')
            kwargs.setdefault(b'listkeys', []).append(b'bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (
        pullop.remote.capable(b'clonebundles')
        and pullop.heads is None
        and list(pullop.common) == [pullop.repo.nullid]
    ):
        kwargs[b'cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_(b'streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_(b"no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
            pullop.repo.ui.status(_(b"requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs[b'obsmarkers'] = True
            pullop.stepsdone.add(b'obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)

    remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
    if remote_sidedata:
        kwargs[b'remote_sidedata'] = remote_sidedata

    with pullop.remote.commandexecutor() as e:
        args = dict(kwargs)
        args[b'source'] = b'pull'
        bundle = e.callcommand(b'getbundle', args).result()

    try:
        op = bundle2.bundleoperation(
            pullop.repo, pullop.gettransaction, source=b'pull'
        )
        op.modes[b'bookmarks'] = b'records'
        bundle2.processbundle(pullop.repo, bundle, op=op)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
        raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.RemoteError(_(b'missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # processing phases change
    for namespace, value in op.records[b'listkeys']:
        if namespace == b'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    if bookmarksrequested:
        books = {}
        for record in op.records[b'bookmarks']:
            books[record[b'bookmark']] = record[b"node"]
        pullop.remotebookmarks = books
    else:
        for namespace, value in op.records[b'listkeys']:
            if namespace == b'bookmarks':
                pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)


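# Illustrative sketch (not part of the original module): for a plain
# non-streaming pull against a modern server, the getbundle arguments
# assembled above end up shaped roughly like this (values are examples):
#
#     args = {
#         b'bundlecaps': ...,   # client bundle capabilities
#         b'common': [...],     # nodes both sides already have
#         b'heads': [...],      # remote heads we want
#         b'cg': True,          # include a changegroup part
#         b'phases': True,      # include binary phase data
#         b'bookmarks': True,   # include binary bookmark data
#         b'source': b'pull',
#     }

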
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""


def _pullchangeset(pullop):
    """pull changeset from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break future useful rollback calls
    if b'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_(b"no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
        pullop.repo.ui.status(_(b"requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable(b'getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle(
            b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
        )
    elif pullop.heads is None:
        with pullop.remote.commandexecutor() as e:
            cg = e.callcommand(
                b'changegroup',
                {
                    b'nodes': pullop.fetch,
                    b'source': b'pull',
                },
            ).result()

    elif not pullop.remote.capable(b'changegroupsubset'):
        raise error.Abort(
            _(
                b"partial pull cannot be done because "
                b"other repository doesn't support "
                b"changegroupsubset."
            )
        )
    else:
        with pullop.remote.commandexecutor() as e:
            cg = e.callcommand(
                b'changegroupsubset',
                {
                    b'bases': pullop.fetch,
                    b'heads': pullop.heads,
                    b'source': b'pull',
                },
            ).result()

    bundleop = bundle2.applybundle(
        pullop.repo, cg, tr, b'pull', pullop.remote.url()
    )
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)


def _pullphase(pullop):
    # Get remote phases data from remote
    if b'phases' in pullop.stepsdone:
        return
    remotephases = listkeys(pullop.remote, b'phases')
    _pullapplyphases(pullop, remotephases)


def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if b'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'phases')
    publishing = bool(remotephases.get(b'publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(
            pullop.repo, pullop.pulledsubset, remotephases
        )
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.index.get_rev
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)


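# Illustrative sketch (not part of the original module): phases are ordered
# public(0) < draft(1) < secret(2), so the `phase(...) > public` filter above
# keeps only nodes whose local phase can still be advanced. For instance:
#
#     >>> public = 0
#     >>> local = {b'n1': 0, b'n2': 1, b'n3': 2}  # node -> local phase
#     >>> [n for n in sorted(local) if local[n] > public]
#     [b'n2', b'n3']

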
def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if b'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmarks_mode = None
    if pullop.remote_path is not None:
        bookmarks_mode = pullop.remote_path.bookmarks_mode
    bookmod.updatefromremote(
        repo.ui,
        repo,
        remotebookmarks,
        pullop.remote.url(),
        pullop.gettransaction,
        explicit=pullop.explicitbookmarks,
        mode=bookmarks_mode,
    )


def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` argument is a function that returns the pull
    transaction, creating one if necessary. We return the transaction to
    inform the calling code that a new transaction has been created (when
    applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if b'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
        remoteobs = listkeys(pullop.remote, b'obsolete')
        if b'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith(b'dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr


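# Hedged sketch of the listkeys payload consumed by _pullobsolete()
# above: the b'obsolete' pushkey namespace maps keys b'dump0', b'dump1',
# ... to base85-encoded obsmarker blobs, conceptually
#
#     {b'dump0': util.b85encode(rawmarkerdata)}
#
# which is why the loop b85-decodes every b'dump*' key before handing
# the result to obsolete._readmarkers().

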
def applynarrowacl(repo, kwargs):
    """Apply narrow fetch access control.

    This massages the named arguments for getbundle wire protocol commands
    so requested data is filtered through access control rules.
    """
    ui = repo.ui
    # TODO this assumes existence of HTTP and is a layering violation.
    username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
    user_includes = ui.configlist(
        _NARROWACL_SECTION,
        username + b'.includes',
        ui.configlist(_NARROWACL_SECTION, b'default.includes'),
    )
    user_excludes = ui.configlist(
        _NARROWACL_SECTION,
        username + b'.excludes',
        ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
    )
    if not user_includes:
        raise error.Abort(
            _(b"%s configuration for user %s is empty")
            % (_NARROWACL_SECTION, username)
        )

    user_includes = [
        b'path:.' if p == b'*' else b'path:' + p for p in user_includes
    ]
    user_excludes = [
        b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
    ]

    req_includes = set(kwargs.get('includepats', []))
    req_excludes = set(kwargs.get('excludepats', []))

    req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
        req_includes, req_excludes, user_includes, user_excludes
    )

    if invalid_includes:
        raise error.Abort(
            _(b"The following includes are not accessible for %s: %s")
            % (username, stringutil.pprint(invalid_includes))
        )

    new_args = {}
    new_args.update(kwargs)
    new_args['narrow'] = True
    new_args['narrow_acl'] = True
    new_args['includepats'] = req_includes
    if req_excludes:
        new_args['excludepats'] = req_excludes

    return new_args


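# Illustrative configuration for applynarrowacl() above (hedged: assumes
# _NARROWACL_SECTION names an hgrc section such as 'narrowacl'; 'alice'
# is a hypothetical user):
#
#     [narrowacl]
#     default.includes = *
#     alice.includes = src docs
#     alice.excludes = src/secret
#
# A bare '*' include is translated to the b'path:.' pattern covering the
# whole repository.

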
def _computeellipsis(repo, common, heads, known, match, depth=None):
    """Compute the shape of a narrowed DAG.

    Args:
      repo: The repository we're transferring.
      common: The roots of the DAG range we're transferring.
          May be just [nullid], which means all ancestors of heads.
      heads: The heads of the DAG range we're transferring.
      match: The narrowmatcher that allows us to identify relevant changes.
      depth: If not None, only consider nodes to be full nodes if they are at
          most depth changesets away from one of heads.

    Returns:
      A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:

        visitnodes: The list of nodes (either full or ellipsis) which
            need to be sent to the client.
        relevant_nodes: The set of changelog nodes which change a file inside
            the narrowspec. The client needs these as non-ellipsis nodes.
        ellipsisroots: A dict of {rev: parents} that is used in
            narrowchangegroup to produce ellipsis nodes with the
            correct parents.
    """
    cl = repo.changelog
    mfl = repo.manifestlog

    clrev = cl.rev

    commonrevs = {clrev(n) for n in common} | {nullrev}
    headsrevs = {clrev(n) for n in heads}

    if depth:
        revdepth = {h: 0 for h in headsrevs}

    ellipsisheads = collections.defaultdict(set)
    ellipsisroots = collections.defaultdict(set)

    def addroot(head, curchange):
        """Add a root to an ellipsis head, splitting heads with 3 roots."""
        ellipsisroots[head].add(curchange)
        # Recursively split ellipsis heads with 3 roots by finding the
        # roots' youngest common descendant which is an elided merge commit.
        # That descendant takes 2 of the 3 roots as its own, and becomes a
        # root of the head.
        while len(ellipsisroots[head]) > 2:
            child, roots = splithead(head)
            splitroots(head, child, roots)
            head = child  # Recurse in case we just added a 3rd root

    def splitroots(head, child, roots):
        ellipsisroots[head].difference_update(roots)
        ellipsisroots[head].add(child)
        ellipsisroots[child].update(roots)
        ellipsisroots[child].discard(child)

    def splithead(head):
        r1, r2, r3 = sorted(ellipsisroots[head])
        for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
            mid = repo.revs(
                b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
            )
            for j in mid:
                if j == nr2:
                    return nr2, (nr1, nr2)
                if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
                    return j, (nr1, nr2)
        raise error.Abort(
            _(
                b'Failed to split up ellipsis node! head: %d, '
                b'roots: %d %d %d'
            )
            % (head, r1, r2, r3)
        )

    missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
    visit = reversed(missing)
    relevant_nodes = set()
    visitnodes = [cl.node(m) for m in missing]
    required = set(headsrevs) | known
    for rev in visit:
        clrev = cl.changelogrevision(rev)
        ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
        if depth is not None:
            curdepth = revdepth[rev]
            for p in ps:
                revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
        needed = False
        shallow_enough = depth is None or revdepth[rev] <= depth
        if shallow_enough:
            curmf = mfl[clrev.manifest].read()
            if ps:
                # We choose to not trust the changed files list in
                # changesets because it's not always correct. TODO: could
                # we trust it for the non-merge case?
                p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
                needed = bool(curmf.diff(p1mf, match))
                if not needed and len(ps) > 1:
                    # For merge changes, the list of changed files is not
                    # helpful, since we need to emit the merge if a file
                    # in the narrow spec has changed on either side of the
                    # merge. As a result, we do a manifest diff to check.
                    p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
                    needed = bool(curmf.diff(p2mf, match))
            else:
                # For a root node, we need to include the node if any
                # files in the node match the narrowspec.
                needed = any(curmf.walk(match))

        if needed:
            for head in ellipsisheads[rev]:
                addroot(head, rev)
            for p in ps:
                required.add(p)
            relevant_nodes.add(cl.node(rev))
        else:
            if not ps:
                ps = [nullrev]
            if rev in required:
                for head in ellipsisheads[rev]:
                    addroot(head, rev)
                for p in ps:
                    ellipsisheads[p].add(rev)
            else:
                for p in ps:
                    ellipsisheads[p] |= ellipsisheads[rev]

    # add common changesets as roots of their reachable ellipsis heads
    for c in commonrevs:
        for head in ellipsisheads[c]:
            addroot(head, c)
    return visitnodes, relevant_nodes, ellipsisroots


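# Worked example for addroot()/splithead() in _computeellipsis() above
# (a hedged sketch): if an ellipsis head H accumulates three roots
# {r1, r2, r3}, splithead() searches for an elided merge M that descends
# from two of them; splitroots() then rewires H's roots to {M, remaining
# root} and hands M the two roots it covers, so every ellipsis node
# keeps at most two parents.

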
def caps20to10(repo, role):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {b'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
    caps.add(b'bundle2=' + urlreq.quote(capsblob))
    return caps


# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}


def getbundle2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary directly."""

    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func

    return dec


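# Illustrative registration using the decorator above (hedged:
# b'myfeature' and the generator body are hypothetical, shown only to
# demonstrate the calling convention that the steps below follow):
#
#     @getbundle2partsgenerator(b'myfeature')
#     def _getbundlemyfeaturepart(
#         bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
#     ):
#         if b2caps and b'myfeature' in b2caps:
#             bundler.newpart(b'myfeature', data=b'payload')

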
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith(b'HG2') for cap in bundlecaps)
    return False


def getbundlechunks(
    repo,
    source,
    heads=None,
    common=None,
    bundlecaps=None,
    remote_sidedata=None,
    **kwargs
):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns a 2-tuple of a dict with metadata about the generated bundle
    and an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    info = {}
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get(b'cg', True):
            raise ValueError(
                _(b'request for bundle10 must include changegroup')
            )

        if kwargs:
            raise ValueError(
                _(b'unsupported getbundle arguments: %s')
                % b', '.join(sorted(kwargs.keys()))
            )
        outgoing = _computeoutgoing(repo, heads, common)
        info[b'bundleversion'] = 1
        return (
            info,
            changegroup.makestream(
                repo,
                outgoing,
                b'01',
                source,
                bundlecaps=bundlecaps,
                remote_sidedata=remote_sidedata,
            ),
        )

    # bundle20 case
    info[b'bundleversion'] = 2
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith(b'bundle2='):
            blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs[b'heads'] = heads
    kwargs[b'common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(
            bundler,
            repo,
            source,
            bundlecaps=bundlecaps,
            b2caps=b2caps,
            remote_sidedata=remote_sidedata,
            **pycompat.strkwargs(kwargs)
        )

    info[b'prefercompressed'] = bundler.prefercompressed

    return info, bundler.getchunks()


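# Hedged usage sketch for getbundlechunks() above ('repo' is any local
# repository object; with bundlecaps=None the legacy bundle10 path is
# taken and a changegroup-01 stream is produced):
#
#     info, gen = getbundlechunks(repo, b'pull', heads=repo.heads())
#     assert info[b'bundleversion'] == 1
#     raw = b''.join(gen)

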
@getbundle2partsgenerator(b'stream2')
def _getbundlestream2(bundler, repo, *args, **kwargs):
    return bundle2.addpartbundlestream2(bundler, repo, **kwargs)


@getbundle2partsgenerator(b'changegroup')
def _getbundlechangegrouppart(
    bundler,
    repo,
    source,
    bundlecaps=None,
    b2caps=None,
    heads=None,
    common=None,
    remote_sidedata=None,
    **kwargs
):
    """add a changegroup part to the requested bundle"""
    if not kwargs.get('cg', True) or not b2caps:
        return

    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)

    outgoing = _computeoutgoing(repo, heads, common)
    if not outgoing.missing:
        return

    if kwargs.get('narrow', False):
        include = sorted(filter(bool, kwargs.get('includepats', [])))
        exclude = sorted(filter(bool, kwargs.get('excludepats', [])))
        matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
    else:
        matcher = None

    cgstream = changegroup.makestream(
        repo,
        outgoing,
        version,
        source,
        bundlecaps=bundlecaps,
        matcher=matcher,
        remote_sidedata=remote_sidedata,
    )

    part = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        part.addparam(b'version', version)

    part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)

    if scmutil.istreemanifest(repo):
        part.addparam(b'treemanifest', b'1')

    if repository.REPO_FEATURE_SIDE_DATA in repo.features:
        part.addparam(b'exp-sidedata', b'1')
        sidedata = bundle2.format_remote_wanted_sidedata(repo)
        part.addparam(b'exp-wanted-sidedata', sidedata)

    if (
        kwargs.get('narrow', False)
        and kwargs.get('narrow_acl', False)
        and (include or exclude)
    ):
        # this is mandatory because otherwise ACL clients won't work
        narrowspecpart = bundler.newpart(b'Narrow:responsespec')
        narrowspecpart.data = b'%s\0%s' % (
            b'\n'.join(include),
            b'\n'.join(exclude),
        )


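# Hedged sketch of the Narrow:responsespec payload built above:
# newline-separated include patterns, a NUL byte, then newline-separated
# exclude patterns, e.g.
#
#     b'path:src\npath:docs\x00path:src/secret'

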
@getbundle2partsgenerator(b'bookmarks')
def _getbundlebookmarkpart(
    bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
):
    """add a bookmark part to the requested bundle"""
    if not kwargs.get('bookmarks', False):
        return
    if not b2caps or b'bookmarks' not in b2caps:
        raise error.Abort(_(b'no common bookmarks exchange method'))
    books = bookmod.listbinbookmarks(repo)
    data = bookmod.binaryencode(repo, books)
    if data:
        bundler.newpart(b'bookmarks', data=data)


@getbundle2partsgenerator(b'listkeys')
def _getbundlelistkeysparts(
    bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart(b'listkeys')
        part.addparam(b'namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)


@getbundle2partsgenerator(b'obsmarkers')
def _getbundleobsmarkerpart(
    bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set(b'::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = obsutil.sortedmarkers(markers)
        bundle2.buildobsmarkerspart(bundler, markers)


@getbundle2partsgenerator(b'phases')
def _getbundlephasespart(
    bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
):
    """add phase heads part to the requested bundle"""
    if kwargs.get('phases', False):
        if not b2caps or b'heads' not in b2caps.get(b'phases'):
            raise error.Abort(_(b'no common phases exchange method'))
        if heads is None:
            heads = repo.heads()

        headsbyphase = collections.defaultdict(set)
        if repo.publishing():
            headsbyphase[phases.public] = heads
        else:
            # find the appropriate heads to move

            phase = repo._phasecache.phase
            node = repo.changelog.node
            rev = repo.changelog.rev
            for h in heads:
                headsbyphase[phase(repo, rev(h))].add(h)
            seenphases = list(headsbyphase.keys())

            # We do not handle anything but public and draft phases for now
            if seenphases:
                assert max(seenphases) <= phases.draft

            # if client is pulling non-public changesets, we need to find
            # intermediate public heads.
            draftheads = headsbyphase.get(phases.draft, set())
            if draftheads:
                publicheads = headsbyphase.get(phases.public, set())

                revset = b'heads(only(%ln, %ln) and public())'
                extraheads = repo.revs(revset, draftheads, publicheads)
                for r in extraheads:
                    headsbyphase[phases.public].add(node(r))

        # transform data in a format used by the encoding function
        phasemapping = {
            phase: sorted(headsbyphase[phase]) for phase in phases.allphases
        }

        # generate the actual part
        phasedata = phases.binaryencode(phasemapping)
        bundler.newpart(b'phase-heads', data=phasedata)


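# Illustrative outcome of _getbundlephasespart() above (hedged sketch):
# for a non-publishing repo with draft head D whose nearest public
# ancestor is P, the revset adds P as an extra public head, so the
# encoded part carries {public: [P], draft: [D]} and the client can
# place the phase boundary correctly.

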
@getbundle2partsgenerator(b'hgtagsfnodes')
def _getbundletagsfnodes(
    bundler,
    repo,
    source,
    bundlecaps=None,
    b2caps=None,
    heads=None,
    common=None,
    **kwargs
):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)


@getbundle2partsgenerator(b'cache:rev-branch-cache')
def _getbundlerevbranchcache(
    bundler,
    repo,
    source,
    bundlecaps=None,
    b2caps=None,
    heads=None,
    common=None,
    **kwargs
):
    """Transfer the rev-branch-cache mapping

    The payload is a series of data related to each branch

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it,
    # - narrow bundle isn't in play (not currently compatible).
    if (
        not kwargs.get('cg', True)
        or not b2caps
        or b'rev-branch-cache' not in b2caps
        or kwargs.get('narrow', False)
        or repo.ui.has_section(_NARROWACL_SECTION)
    ):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addpartrevbranchcache(repo, bundler, outgoing)


def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest()
    if not (
        their_heads == [b'force']
        or their_heads == heads
        or their_heads == [b'hashed', heads_hash]
    ):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced(
            b'repository changed while %s - please try again' % context
        )


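# Hedged sketch of the client side of check_heads() above: instead of
# the full head list, a pushing client may send the compact form
#
#     [b'hashed', hashutil.sha1(b''.join(sorted(remoteheads))).digest()]
#
# which matches the heads_hash the server computes here.

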
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool(
        b'experimental', b'bundle2-output-capture'
    )
    if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, b'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = b"\n".join([source, urlutil.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                op = bundle2.applybundle(repo, cg, tr, source, url)
                r = bundle2.combinechangegroupresults(op)
        else:
            r = None
            try:

                def gettransaction():
                    if not lockandtr[2]:
                        if not bookmod.bookmarksinstore(repo):
                            lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs[b'source'] = source
                        lockandtr[2].hookargs[b'url'] = url
                        lockandtr[2].hookargs[b'bundle2'] = b'1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool(
                    b'experimental', b'bundle2lazylocking'
                ):
                    gettransaction()

                op = bundle2.bundleoperation(
                    repo,
                    gettransaction,
                    captureoutput=captureoutput,
                    source=b'push',
                )
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)

                        def recordout(output):
                            r.newpart(b'output', data=output, mandatory=False)

                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()

                    def recordout(output):
                        part = bundle2.bundlepart(
                            b'output', data=output, mandatory=False
                        )
                        parts.append(part)

                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r


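# Note on unbundle() above (hedged): with
# --config experimental.bundle2lazylocking=true the wlock/lock/
# transaction triplet is only taken once a bundle2 part actually calls
# gettransaction(); by default it is taken greedily up front.

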
def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool(b'ui', b'clonebundles'):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable(b'clonebundles'):
        return

    with remote.commandexecutor() as e:
        res = e.callcommand(b'clonebundles', {}).result()

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = bundlecaches.parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(
            _(
                b'no clone bundles available on remote; '
                b'falling back to regular clone\n'
            )
        )
        return

    entries = bundlecaches.filterclonebundleentries(
        repo, entries, streamclonerequested=pullop.streamclonerequested
    )

    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(
            _(
                b'no compatible clone bundles available on server; '
                b'falling back to regular clone\n'
            )
        )
        repo.ui.warn(
            _(b'(you may want to report this to the server operator)\n')
        )
        return

    entries = bundlecaches.sortclonebundleentries(repo.ui, entries)

    url = entries[0][b'URL']
    repo.ui.status(_(b'applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_(b'finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
        repo.ui.warn(_(b'falling back to normal clone\n'))
    else:
        raise error.Abort(
            _(b'error applying bundle'),
            hint=_(
                b'if this error persists, consider contacting '
                b'the server operator or disable clone '
                b'bundles via '
                b'"--config ui.clonebundles=false"'
            ),
        )


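# Illustrative clone bundles manifest as returned by the b'clonebundles'
# wire command and parsed above (hedged: URLs and attribute values are
# examples; BUNDLESPEC is the commonly used attribute):
#
#     https://example.com/full.hg BUNDLESPEC=zstd-v2
#     https://example.com/stream.hg BUNDLESPEC=none-packed1

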
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction(b'bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, b'stream')

            if isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                bundle2.applybundle(repo, cg, tr, b'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(
                _(b'HTTP error fetching bundle: %s\n')
                % stringutil.forcebytestr(e)
            )
        except urlerr.urlerror as e:
            ui.warn(
                _(b'error fetching bundle: %s\n')
                % stringutil.forcebytestr(e.reason)
            )

    return False

@@ -1,23 +1,25 @@
== New Features ==


== Default Format Change ==

These changes affect newly created repositories (or new clones) made with
Mercurial XXX.


== New Experimental Features ==

== Bug Fixes ==

The `--no-check` and `--no-merge` flags now properly override the behavior from `commands.update.check`.

== Backwards Compatibility Changes ==

The remotefilelog extension now requires an appropriate excludepattern
for subrepositories.

== Internal API Changes ==

The following functions have been removed:

Miscellaneous: