clonebundle: move the manifest filename to a constant...
marmoute - r46370:80f32ec8 default
diff --git a/hgext/clonebundles.py b/hgext/clonebundles.py
@@ -1,228 +1,229 @@
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""advertise pre-generated bundles to seed clones

"clonebundles" is a server-side extension used to advertise the existence
of pre-generated, externally hosted bundle files to clients that are
cloning so that cloning can be faster, more reliable, and require fewer
resources on the server. "pullbundles" is a related feature for sending
pre-generated bundle files to clients as part of pull operations.

Cloning can be a CPU and I/O intensive operation on servers. Traditionally,
the server, in response to a client's request to clone, dynamically generates
a bundle containing the entire repository content and sends it to the client.
There is no caching on the server and the server will have to redundantly
generate the same outgoing bundle in response to each clone request. For
servers with large repositories or with high clone volume, the load from
clones can make scaling the server challenging and costly.

This extension provides server operators the ability to offload
potentially expensive clone load to an external service. Pre-generated
bundles also allow using more CPU intensive compression, reducing the
effective bandwidth requirements.

Here's how clone bundles work:

1. A server operator establishes a mechanism for making bundle files available
   on a hosting service where Mercurial clients can fetch them.
2. A manifest file listing available bundle URLs and some optional metadata
   is added to the Mercurial repository on the server.
3. A client initiates a clone against a clone bundles aware server.
4. The client sees the server is advertising clone bundles and fetches the
   manifest listing available bundles.
5. The client filters and sorts the available bundles based on what it
   supports and prefers.
6. The client downloads and applies an available bundle from the
   server-specified URL.
7. The client reconnects to the original server and performs the equivalent
   of :hg:`pull` to retrieve all repository data not in the bundle. (The
   repository could have been updated between when the bundle was created
   and when the client started the clone.) This may use "pullbundles".

Instead of the server generating full repository bundles for every clone
request, it generates full bundles once and they are subsequently reused to
bootstrap new clones. The server may still transfer data at clone time.
However, this is only data that has been added/changed since the bundle was
created. For large, established repositories, this can reduce server load for
clones to less than 1% of original.

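From the user's perspective this is an ordinary clone. A minimal sketch of
the client-side output when a bundle is advertised (hostnames and bundle
names are hypothetical):

    $ hg clone https://hg.example.com/myrepo
    applying clone bundle from https://cdn.example.com/myrepo-full.hg
    adding changesets
    adding manifests
    adding file changes
    finished applying clone bundle
    searching for changes
    no changes found
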
Here's how pullbundles work:

1. A manifest file listing available bundles and describing the revisions
   is added to the Mercurial repository on the server.
2. A new-enough client informs the server that it supports partial pulls
   and initiates a pull.
3. If the server has pull bundles enabled and sees the client advertising
   partial pulls, it checks for a matching pull bundle in the manifest.
   A bundle matches if its format is supported by the client, the client
   already has the bundle's required base revisions, and it still needs
   something from the bundle.
4. If there is at least one matching bundle, the server sends it to the client.
5. The client applies the bundle and notices that the server reply was
   incomplete. It initiates another pull.

To work, this extension requires the following of server operators:

* Generating bundle files of repository content (typically periodically,
  such as once per day).
* Clone bundles: A file server that clients have network access to and that
  Python knows how to talk to through its normal URL handling facility
  (typically an HTTP/HTTPS server).
* A process for keeping the bundles manifest in sync with available bundle
  files.

Strictly speaking, using a static file hosting server isn't required: a server
operator could use a dynamic service for retrieving bundle data. However,
static file hosting services are simple and scalable and should be sufficient
for most needs.

Bundle files can be generated with the :hg:`bundle` command. Typically
:hg:`bundle --all` is used to produce a bundle of the entire repository.

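For example, a server operator's daily job might regenerate the bundle and
copy it to the hosting service; a sketch, with all paths and hosts
hypothetical:

    $ hg -R /srv/myrepo bundle --all --type 'zstd-v2' /tmp/myrepo-full.hg
    $ rsync /tmp/myrepo-full.hg bundles@cdn.example.com:/var/www/bundles/
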
:hg:`debugcreatestreamclonebundle` can be used to produce a special
*streaming clonebundle*. These are bundle files that are extremely efficient
to produce and consume (read: fast). However, they are larger than
traditional bundle formats and require that clients support the exact set
of repository data store formats in use by the repository that created them.
Typically, a newer server can serve data that is compatible with older clients.
However, *streaming clone bundles* don't have this guarantee. **Server
operators need to be aware that newer versions of Mercurial may produce
streaming clone bundles incompatible with older Mercurial versions.**

A server operator is responsible for creating a ``.hg/clonebundles.manifest``
file containing the list of available bundle files suitable for seeding
clones. If this file does not exist, the repository will not advertise the
existence of clone bundles when clients connect. For pull bundles,
``.hg/pullbundles.manifest`` is used.

The manifest file contains a newline (\\n) delimited list of entries.

Each line in this file defines an available bundle. Lines have the format:

    <URL> [<key>=<value>[ <key>=<value>]]

That is, a URL followed by an optional, space-delimited list of key=value
pairs describing additional properties of this bundle. Both keys and values
are URI encoded.

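For illustration, a manifest advertising two bundles, using reserved keys
described below plus a custom ``datacenter`` key (URLs and values are
made up):

    https://cdn.example.com/full.zstd.hg BUNDLESPEC=zstd-v2 REQUIREDRAM=186MB datacenter=us-west
    https://mirror.example.com/full.gzip.hg BUNDLESPEC=gzip-v2 datacenter=eu-central
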
For pull bundles, the URL is a path under the ``.hg`` directory of the
repository.

Keys in UPPERCASE are reserved for use by Mercurial and are defined below.
All non-uppercase keys can be used by site installations. An example use
for custom properties is to use the *datacenter* attribute to define which
data center a file is hosted in. Clients could then prefer a server in the
data center closest to them.

The following reserved keys are currently defined:

BUNDLESPEC
   A "bundle specification" string that describes the type of the bundle.

   These are string values that are accepted by the "--type" argument of
   :hg:`bundle`.

   The values are parsed in strict mode, which means they must be of the
   "<compression>-<type>" form. See
   mercurial.bundlecaches.parsebundlespec() for more details.

   :hg:`debugbundle --spec` can be used to print the bundle specification
   string for a bundle file. The output of this command can be used verbatim
   for the value of ``BUNDLESPEC`` (it is already escaped).

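   For example, assuming a bundle file produced as above:

       $ hg debugbundle --spec myrepo-full.hg
       zstd-v2
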
   Clients will automatically filter out specifications that are unknown or
   unsupported so they won't attempt to download something that likely won't
   apply.

   The actual value doesn't impact client behavior beyond filtering:
   clients will still sniff the bundle type from the header of downloaded
   files.

   **Use of this key is highly recommended**, as it allows clients to
   easily skip unsupported bundles. If this key is not defined, an old
   client may attempt to apply a bundle that it is incapable of reading.

REQUIRESNI
   Whether Server Name Indication (SNI) is required to connect to the URL.
   SNI allows servers to use multiple certificates on the same IP. It is
   somewhat common in CDNs and other hosting providers. Older Python
   versions do not support SNI. Defining this attribute enables clients
   with older Python versions to filter this entry without experiencing
   an opaque SSL failure at connection time.

   If this is defined, it is important to advertise a non-SNI fallback
   URL or clients running old Python releases may not be able to clone
   with the clonebundles facility.

   Value should be "true".

REQUIREDRAM
   Value specifies expected memory requirements to decode the payload.
   Values can have suffixes for common byte sizes, e.g. "64MB".

   This key is often used with zstd-compressed bundles using a high
   compression level / window size, which can require 100+ MB of memory
   to decode.

heads
   Used for pull bundles. This contains the ``;`` separated changeset
   hashes of the heads of the bundle content.

bases
   Used for pull bundles. This contains the ``;`` separated changeset
   hashes of the roots of the bundle content. This can be skipped if
   the bundle was created without ``--base``.

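A ``.hg/pullbundles.manifest`` entry therefore uses a relative path and
describes its content through ``heads`` and ``bases``; a sketch with
shortened, made-up hashes:

    pullbundles/0-1000.hg BUNDLESPEC=gzip-v2 heads=3e1f...;7c2a... bases=d4b8...
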
Manifests can contain multiple entries. Assuming metadata is defined, clients
will filter entries from the manifest that they don't support. The remaining
entries are optionally sorted by client preferences
(``ui.clonebundleprefers`` config option). The client then attempts
to fetch the bundle at the first URL in the remaining list.

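For example, a client that prefers v2 bundles and zstd over gzip compression
could set (the attribute names match the manifest keys above):

    [ui]
    clonebundleprefers = VERSION=v2, COMPRESSION=zstd, COMPRESSION=gzip
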
**Errors when downloading a bundle will fail the entire clone operation:
clients do not automatically fall back to a traditional clone.** The reason
for this is that if a server is using clone bundles, it is probably doing so
because the feature is necessary to help it scale. In other words, there
is an assumption that clone load will be offloaded to another service and
that the Mercurial server isn't responsible for serving this clone load.
If that other service experiences issues and clients start falling back to
the original Mercurial server en masse, the unexpected added clone load
could overwhelm the server and effectively take it offline. Not having
clients automatically fall back to cloning from the original server
mitigates this scenario.

Because there is no automatic Mercurial server fallback on failure of the
bundle hosting service, it is important for server operators to view the bundle
hosting service as an extension of the Mercurial server in terms of
availability and service level agreements: if the bundle hosting service goes
down, so does the ability for clients to clone. Note: clients will see a
message informing them how to bypass the clone bundles facility when a failure
occurs, so server operators should prepare for some people to follow these
instructions, thus driving more load to the original Mercurial server when
the bundle hosting service fails.
"""

from __future__ import absolute_import

from mercurial import (
+    bundlecaches,
    extensions,
    wireprotov1server,
)

testedwith = b'ships-with-hg-core'


def capabilities(orig, repo, proto):
    caps = orig(repo, proto)

    # Only advertise if a manifest exists. This does add some I/O to requests.
    # But this should be cheaper than a wasted network round trip due to a
    # missing file.
-    if repo.vfs.exists(b'clonebundles.manifest'):
+    if repo.vfs.exists(bundlecaches.CB_MANIFEST_FILE):
        caps.append(b'clonebundles')

    return caps


def extsetup(ui):
    extensions.wrapfunction(wireprotov1server, b'_capabilities', capabilities)
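With the extension enabled and a manifest present, the extra capability
becomes visible to clients; a sketch (URL hypothetical, output elided):

    $ hg debugcapabilities https://hg.example.com/myrepo
    Main capabilities:
      ...
      clonebundles
      ...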
diff --git a/mercurial/bundlecaches.py b/mercurial/bundlecaches.py
@@ -1,422 +1,424 @@
# bundlecaches.py - utilities to deal with pre-computed bundles for servers
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from .i18n import _

from .thirdparty import attr

from . import (
    error,
    sslutil,
    util,
)
from .utils import stringutil

urlreq = util.urlreq

+CB_MANIFEST_FILE = b'clonebundles.manifest'
+

@attr.s
class bundlespec(object):
    compression = attr.ib()
    wirecompression = attr.ib()
    version = attr.ib()
    wireversion = attr.ib()
    params = attr.ib()
    contentopts = attr.ib()


# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {
    b'v1': b'01',
    b'v2': b'02',
    b'packed1': b's1',
    b'bundle2': b'02',  # legacy
}

# Maps bundle version with content opts to choose which part to bundle
_bundlespeccontentopts = {
    b'v1': {
        b'changegroup': True,
        b'cg.version': b'01',
        b'obsolescence': False,
        b'phases': False,
        b'tagsfnodescache': False,
        b'revbranchcache': False,
    },
    b'v2': {
        b'changegroup': True,
        b'cg.version': b'02',
        b'obsolescence': False,
        b'phases': False,
        b'tagsfnodescache': True,
        b'revbranchcache': True,
    },
    b'packed1': {b'cg.version': b's1'},
}
_bundlespeccontentopts[b'bundle2'] = _bundlespeccontentopts[b'v2']

_bundlespecvariants = {
    b"streamv2": {
        b"changegroup": False,
        b"streamv2": True,
        b"tagsfnodescache": False,
        b"revbranchcache": False,
    }
}

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {b'gzip', b'bzip2', b'none'}

def parsebundlespec(repo, spec, strict=True):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

        <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    Returns a bundlespec object of (compression, version, parameters).
    Compression will be ``None`` if not in strict mode and a compression isn't
    defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """

    def parseparams(s):
        if b';' not in s:
            return s, {}

        params = {}
        version, paramstr = s.split(b';', 1)

        for p in paramstr.split(b';'):
            if b'=' not in p:
                raise error.InvalidBundleSpecification(
                    _(
                        b'invalid bundle specification: '
                        b'missing "=" in parameter: %s'
                    )
                    % p
                )

            key, value = p.split(b'=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params

    if strict and b'-' not in spec:
        raise error.InvalidBundleSpecification(
            _(
                b'invalid bundle specification; '
                b'must be prefixed with compression: %s'
            )
            % spec
        )

    if b'-' in spec:
        compression, version = spec.split(b'-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                _(b'%s compression is not supported') % compression
            )

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                _(b'%s is not a recognized bundle version') % version
            )
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = b'v1'
            # Generaldelta repos require v2.
            if b'generaldelta' in repo.requirements:
                version = b'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = b'v2'
        elif spec in _bundlespeccgversions:
            if spec == b'packed1':
                compression = b'none'
            else:
                compression = b'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                _(b'%s is not a recognized bundle specification') % spec
            )

    # Bundle version 1 only supports a known set of compression engines.
    if version == b'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _(b'compression engine %s is not supported on v1 bundles')
            % compression
        )

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == b'packed1' and b'requirements' in params:
        requirements = set(params[b'requirements'].split(b','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _(b'missing support for repository features: %s')
                % b', '.join(sorted(missingreqs))
            )

    # Compute contentopts based on the version
    contentopts = _bundlespeccontentopts.get(version, {}).copy()

    # Process the variants
    if b"stream" in params and params[b"stream"] == b"v2":
        variant = _bundlespecvariants[b"streamv2"]
        contentopts.update(variant)

    engine = util.compengines.forbundlename(compression)
    compression, wirecompression = engine.bundletype()
    wireversion = _bundlespeccgversions[version]

    return bundlespec(
        compression, wirecompression, version, wireversion, params, contentopts
    )


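# A quick sketch of how a BUNDLESPEC value flows through parsebundlespec()
# (``repo`` stands for any local repository object; the expected attribute
# values follow from the tables above):
#
#     spec = parsebundlespec(repo, b'zstd-v2', strict=True)
#     # spec.compression == b'zstd', spec.version == b'v2'
#     # spec.wirecompression == b'ZS', spec.wireversion == b'02'
#
#     # In non-strict mode the compression alone is enough; the version is
#     # inferred (v2 here, since zstd is not a v1 compression engine):
#     spec = parsebundlespec(repo, b'zstd', strict=False)
#     # spec.version == b'v2'

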
def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {b'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split(b'=', 1)
            key = util.urlreq.unquote(key)
            value = util.urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == b'BUNDLESPEC':
                try:
                    bundlespec = parsebundlespec(repo, value)
                    attrs[b'COMPRESSION'] = bundlespec.compression
                    attrs[b'VERSION'] = bundlespec.version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m


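# For illustration, feeding one manifest line through the parser (URL
# hypothetical); note how the BUNDLESPEC attribute is expanded into
# COMPRESSION and VERSION components:
#
#     data = b'https://cdn.example.com/full.hg BUNDLESPEC=zstd-v2\n'
#     entries = parseclonebundlesmanifest(repo, data)
#     # entries == [{b'URL': b'https://cdn.example.com/full.hg',
#     #              b'BUNDLESPEC': b'zstd-v2',
#     #              b'COMPRESSION': b'zstd',
#     #              b'VERSION': b'v2'}]

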
def isstreamclonespec(bundlespec):
    # Stream clone v1
    if bundlespec.wirecompression == b'UN' and bundlespec.wireversion == b's1':
        return True

    # Stream clone v2
    if (
        bundlespec.wirecompression == b'UN'
        and bundlespec.wireversion == b'02'
        and bundlespec.contentopts.get(b'streamv2')
    ):
        return True

    return False


def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get(b'BUNDLESPEC')
        if spec:
            try:
                bundlespec = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                if streamclonerequested and not isstreamclonespec(bundlespec):
                    repo.ui.debug(
                        b'filtering %s because not a stream clone\n'
                        % entry[b'URL']
                    )
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(stringutil.forcebytestr(e) + b'\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug(
                    b'filtering %s because unsupported bundle '
                    b'spec: %s\n' % (entry[b'URL'], stringutil.forcebytestr(e))
                )
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug(
                b'filtering %s because cannot determine if a stream '
                b'clone bundle\n' % entry[b'URL']
            )
            continue

        if b'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug(
                b'filtering %s because SNI not supported\n' % entry[b'URL']
            )
            continue

        if b'REQUIREDRAM' in entry:
            try:
                requiredram = util.sizetoint(entry[b'REQUIREDRAM'])
            except error.ParseError:
                repo.ui.debug(
                    b'filtering %s due to a bad REQUIREDRAM attribute\n'
                    % entry[b'URL']
                )
                continue
            actualram = repo.ui.estimatememory()
            if actualram is not None and actualram * 0.66 < requiredram:
                repo.ui.debug(
                    b'filtering %s as it needs more than 2/3 of system memory\n'
                    % entry[b'URL']
                )
                continue

        newentries.append(entry)

    return newentries


class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing the attribute while a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing the attribute while b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless the attribute is present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to the next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to the next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0


def sortclonebundleentries(ui, entries):
    prefers = ui.configlist(b'ui', b'clonebundleprefers')
    if not prefers:
        return list(entries)

    def _split(p):
        if b'=' not in p:
            hint = _(b"each comma separated item should be key=value pairs")
            raise error.Abort(
                _(b"invalid ui.clonebundleprefers item: %s") % p, hint=hint
            )
        return p.split(b'=', 1)

    prefers = [_split(p) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]
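

# Illustration: with ui.clonebundleprefers set as in the extension docstring
# (VERSION=v2, COMPRESSION=zstd, COMPRESSION=gzip), entries advertising
# VERSION=v2 sort first; among those, zstd beats gzip, and remaining ties
# keep their manifest order because sorted() is stable. A typical client-side
# pipeline over a fetched manifest would be:
#
#     entries = parseclonebundlesmanifest(repo, manifest_text)
#     entries = filterclonebundleentries(repo, entries)
#     entries = sortclonebundleentries(repo.ui, entries)
#     # entries[0][b'URL'] is the bundle the client will try first.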
diff --git a/mercurial/localrepo.py b/mercurial/localrepo.py
@@ -1,3562 +1,3563 @@
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import functools
import os
import random
import sys
import time
import weakref

from .i18n import _
from .node import (
    bin,
    hex,
    nullid,
    nullrev,
    short,
)
from .pycompat import (
    delattr,
    getattr,
)
from . import (
    bookmarks,
    branchmap,
    bundle2,
+    bundlecaches,
    changegroup,
    color,
    commit,
    context,
    dirstate,
    dirstateguard,
    discovery,
    encoding,
    error,
    exchange,
    extensions,
    filelog,
    hook,
    lock as lockmod,
    match as matchmod,
    mergestate as mergestatemod,
    mergeutil,
    namespaces,
    narrowspec,
    obsolete,
    pathutil,
    phases,
    pushkey,
    pycompat,
    rcutil,
    repoview,
    requirements as requirementsmod,
    revset,
    revsetlang,
    scmutil,
    sparse,
    store as storemod,
    subrepoutil,
    tags as tagsmod,
    transaction,
    txnutil,
    util,
    vfs as vfsmod,
)

from .interfaces import (
    repository,
    util as interfaceutil,
)

from .utils import (
    hashutil,
    procutil,
    stringutil,
)

from .revlogutils import constants as revlogconst

release = lockmod.release
urlerr = util.urlerr
urlreq = util.urlreq

# set of (path, vfs-location) tuples. vfs-location is:
# - 'plain' for vfs relative paths
# - '' for svfs relative paths
_cachedfiles = set()


class _basefilecache(scmutil.filecache):
    """All filecache usage on repo is done for logic that should be unfiltered
    """

    def __get__(self, repo, type=None):
        if repo is None:
            return self
        # proxy to unfiltered __dict__ since filtered repo has no entry
        unfi = repo.unfiltered()
        try:
            return unfi.__dict__[self.sname]
        except KeyError:
            pass
        return super(_basefilecache, self).__get__(unfi, type)

    def set(self, repo, value):
        return super(_basefilecache, self).set(repo.unfiltered(), value)


class repofilecache(_basefilecache):
    """filecache for files in .hg but outside of .hg/store"""

    def __init__(self, *paths):
        super(repofilecache, self).__init__(*paths)
        for path in paths:
            _cachedfiles.add((path, b'plain'))

    def join(self, obj, fname):
        return obj.vfs.join(fname)


class storecache(_basefilecache):
    """filecache for files in the store"""

    def __init__(self, *paths):
        super(storecache, self).__init__(*paths)
        for path in paths:
            _cachedfiles.add((path, b''))

    def join(self, obj, fname):
        return obj.sjoin(fname)


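# A sketch of how these descriptors are typically used on the repository
# class (the decorated names mirror real localrepository properties, shown
# here only for illustration):
#
#     class localrepository(object):
#         @repofilecache(b'bookmarks', b'bookmarks.current')
#         def _bookmarks(self):       # backed by files in .hg/
#             ...
#
#         @storecache(b'00changelog.i')
#         def changelog(self):        # backed by a file in .hg/store/
#             ...

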
class mixedrepostorecache(_basefilecache):
    """filecache for a mix of files in .hg/store and outside"""

    def __init__(self, *pathsandlocations):
        # scmutil.filecache only uses the path for passing back into our
        # join(), so we can safely pass a list of paths and locations
        super(mixedrepostorecache, self).__init__(*pathsandlocations)
        _cachedfiles.update(pathsandlocations)

    def join(self, obj, fnameandlocation):
        fname, location = fnameandlocation
        if location == b'plain':
            return obj.vfs.join(fname)
        else:
            if location != b'':
                raise error.ProgrammingError(
                    b'unexpected location: %s' % location
                )
            return obj.sjoin(fname)


def isfilecached(repo, name):
    """check if a repo has already cached "name" filecache-ed property

    This returns (cachedobj-or-None, iscached) tuple.
    """
    cacheentry = repo.unfiltered()._filecache.get(name, None)
    if not cacheentry:
        return None, False
    return cacheentry.obj, True


172 class unfilteredpropertycache(util.propertycache):
173 class unfilteredpropertycache(util.propertycache):
173 """propertycache that apply to unfiltered repo only"""
174 """propertycache that apply to unfiltered repo only"""
174
175
175 def __get__(self, repo, type=None):
176 def __get__(self, repo, type=None):
176 unfi = repo.unfiltered()
177 unfi = repo.unfiltered()
177 if unfi is repo:
178 if unfi is repo:
178 return super(unfilteredpropertycache, self).__get__(unfi)
179 return super(unfilteredpropertycache, self).__get__(unfi)
179 return getattr(unfi, self.name)
180 return getattr(unfi, self.name)
180
181
181
182
182 class filteredpropertycache(util.propertycache):
183 class filteredpropertycache(util.propertycache):
183 """propertycache that must take filtering in account"""
184 """propertycache that must take filtering in account"""
184
185
185 def cachevalue(self, obj, value):
186 def cachevalue(self, obj, value):
186 object.__setattr__(obj, self.name, value)
187 object.__setattr__(obj, self.name, value)
187
188
188
189
189 def hasunfilteredcache(repo, name):
190 def hasunfilteredcache(repo, name):
190 """check if a repo has an unfilteredpropertycache value for <name>"""
191 """check if a repo has an unfilteredpropertycache value for <name>"""
191 return name in vars(repo.unfiltered())
192 return name in vars(repo.unfiltered())
192
193
193
194
194 def unfilteredmethod(orig):
195 def unfilteredmethod(orig):
195 """decorate method that always need to be run on unfiltered version"""
196 """decorate method that always need to be run on unfiltered version"""
196
197
197 @functools.wraps(orig)
198 @functools.wraps(orig)
198 def wrapper(repo, *args, **kwargs):
199 def wrapper(repo, *args, **kwargs):
199 return orig(repo.unfiltered(), *args, **kwargs)
200 return orig(repo.unfiltered(), *args, **kwargs)
200
201
201 return wrapper
202 return wrapper
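
# Sketch of how the decorator above is typically applied (`destroyed` stands
# in for any method that must see the unfiltered repo):
#
#     @unfilteredmethod
#     def destroyed(self):
#         # `self` is always the unfiltered repo here, so revision numbers
#         # stay stable no matter which filtered view the caller held
#         ...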


moderncaps = {
    b'lookup',
    b'branchmap',
    b'pushkey',
    b'known',
    b'getbundle',
    b'unbundle',
}
legacycaps = moderncaps.union({b'changegroupsubset'})


@interfaceutil.implementer(repository.ipeercommandexecutor)
class localcommandexecutor(object):
    def __init__(self, peer):
        self._peer = peer
        self._sent = False
        self._closed = False

    def __enter__(self):
        return self

    def __exit__(self, exctype, excvalue, exctb):
        self.close()

    def callcommand(self, command, args):
        if self._sent:
            raise error.ProgrammingError(
                b'callcommand() cannot be used after sendcommands()'
            )

        if self._closed:
            raise error.ProgrammingError(
                b'callcommand() cannot be used after close()'
            )

        # We don't need to support anything fancy. Just call the named
        # method on the peer and return a resolved future.
        fn = getattr(self._peer, pycompat.sysstr(command))

        f = pycompat.futures.Future()

        try:
            result = fn(**pycompat.strkwargs(args))
        except Exception:
            pycompat.future_set_exception_info(f, sys.exc_info()[1:])
        else:
            f.set_result(result)

        return f

    def sendcommands(self):
        self._sent = True

    def close(self):
        self._closed = True
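
# A command executor is normally driven through the context-manager protocol
# above; a minimal sketch (`peer` is assumed to come from elsewhere, e.g.
# hg.peer()). In the local case the returned future is already resolved:
#
#     with peer.commandexecutor() as e:
#         f = e.callcommand(b'lookup', {b'key': b'tip'})
#     node = f.result()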


@interfaceutil.implementer(repository.ipeercommands)
class localpeer(repository.peer):
    '''peer for a local repo; reflects only the most recent API'''

    def __init__(self, repo, caps=None):
        super(localpeer, self).__init__()

        if caps is None:
            caps = moderncaps.copy()
        self._repo = repo.filtered(b'served')
        self.ui = repo.ui
        self._caps = repo._restrictcapabilities(caps)

    # Begin of _basepeer interface.

    def url(self):
        return self._repo.url()

    def local(self):
        return self._repo

    def peer(self):
        return self

    def canpush(self):
        return True

    def close(self):
        self._repo.close()

    # End of _basepeer interface.

    # Begin of _basewirecommands interface.

    def branchmap(self):
        return self._repo.branchmap()

    def capabilities(self):
        return self._caps

    def clonebundles(self):
        return self._repo.tryread(bundlecaches.CB_MANIFEST_FILE)
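
    # The manifest read above is plain text: one bundle per line, given as a
    # URL optionally followed by whitespace-separated key=value attributes.
    # An illustrative (hypothetical) entry:
    #
    #     https://example.com/repo-full.hg BUNDLESPEC=gzip-v2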

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        """Used to test argument passing over the wire"""
        return b"%s %s %s %s %s" % (
            one,
            two,
            pycompat.bytestr(three),
            pycompat.bytestr(four),
            pycompat.bytestr(five),
        )

    def getbundle(
        self, source, heads=None, common=None, bundlecaps=None, **kwargs
    ):
        chunks = exchange.getbundlechunks(
            self._repo,
            source,
            heads=heads,
            common=common,
            bundlecaps=bundlecaps,
            **kwargs
        )[1]
        cb = util.chunkbuffer(chunks)

        if exchange.bundle2requested(bundlecaps):
            # When requesting a bundle2, getbundle returns a stream to make the
            # wire level function happier. We need to build a proper object
            # from it in local peer.
            return bundle2.getunbundler(self.ui, cb)
        else:
            return changegroup.getunbundler(b'01', cb, None)

    def heads(self):
        return self._repo.heads()

    def known(self, nodes):
        return self._repo.known(nodes)

    def listkeys(self, namespace):
        return self._repo.listkeys(namespace)

    def lookup(self, key):
        return self._repo.lookup(key)

    def pushkey(self, namespace, key, old, new):
        return self._repo.pushkey(namespace, key, old, new)

    def stream_out(self):
        raise error.Abort(_(b'cannot perform stream clone against local peer'))

    def unbundle(self, bundle, heads, url):
        """apply a bundle on a repo

        This function handles the repo locking itself."""
        try:
            try:
                bundle = exchange.readbundle(self.ui, bundle, None)
                ret = exchange.unbundle(self._repo, bundle, heads, b'push', url)
                if util.safehasattr(ret, b'getchunks'):
                    # This is a bundle20 object, turn it into an unbundler.
                    # This little dance should be dropped eventually when the
                    # API is finally improved.
                    stream = util.chunkbuffer(ret.getchunks())
                    ret = bundle2.getunbundler(self.ui, stream)
                return ret
            except Exception as exc:
                # If the exception contains output salvaged from a bundle2
                # reply, we need to make sure it is printed before continuing
                # to fail. So we build a bundle2 with such output and consume
                # it directly.
                #
                # This is not very elegant but allows a "simple" solution for
                # issue4594
                output = getattr(exc, '_bundle2salvagedoutput', ())
                if output:
                    bundler = bundle2.bundle20(self._repo.ui)
                    for out in output:
                        bundler.addpart(out)
                    stream = util.chunkbuffer(bundler.getchunks())
                    b = bundle2.getunbundler(self.ui, stream)
                    bundle2.processbundle(self._repo, b)
                raise
        except error.PushRaced as exc:
            raise error.ResponseError(
                _(b'push failed:'), stringutil.forcebytestr(exc)
            )

    # End of _basewirecommands interface.

    # Begin of peer interface.

    def commandexecutor(self):
        return localcommandexecutor(self)

    # End of peer interface.


@interfaceutil.implementer(repository.ipeerlegacycommands)
class locallegacypeer(localpeer):
    '''peer extension which implements legacy methods too; used for tests with
    restricted capabilities'''

    def __init__(self, repo):
        super(locallegacypeer, self).__init__(repo, caps=legacycaps)

    # Begin of baselegacywirecommands interface.

    def between(self, pairs):
        return self._repo.between(pairs)

    def branches(self, nodes):
        return self._repo.branches(nodes)

    def changegroup(self, nodes, source):
        outgoing = discovery.outgoing(
            self._repo, missingroots=nodes, ancestorsof=self._repo.heads()
        )
        return changegroup.makechangegroup(self._repo, outgoing, b'01', source)

    def changegroupsubset(self, bases, heads, source):
        outgoing = discovery.outgoing(
            self._repo, missingroots=bases, ancestorsof=heads
        )
        return changegroup.makechangegroup(self._repo, outgoing, b'01', source)

    # End of baselegacywirecommands interface.


# Functions receiving (ui, features) that extensions can register to impact
# the ability to load repositories with custom requirements. Only
# functions defined in loaded extensions are called.
#
# The function receives a set of requirement strings that the repository
# is capable of opening. Functions will typically add elements to the
# set to reflect that the extension knows how to handle those requirements.
featuresetupfuncs = set()


def _getsharedvfs(hgvfs, requirements):
    """returns the vfs object pointing to the root of the shared source
    repo for a shared repository

    hgvfs is the vfs pointing at .hg/ of the current repo (the shared one)
    requirements is a set of requirements of the current repo (the shared one)
    """
    # The ``shared`` or ``relshared`` requirements indicate the
    # store lives in the path contained in the ``.hg/sharedpath`` file.
    # This is an absolute path for ``shared`` and relative to
    # ``.hg/`` for ``relshared``.
    sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
    if requirementsmod.RELATIVE_SHARED_REQUIREMENT in requirements:
        sharedpath = hgvfs.join(sharedpath)

    sharedvfs = vfsmod.vfs(sharedpath, realpath=True)

    if not sharedvfs.exists():
        raise error.RepoError(
            _(b'.hg/sharedpath points to nonexistent directory %s')
            % sharedvfs.base
        )
    return sharedvfs


def _readrequires(vfs, allowmissing):
    """reads the requires file present at the root of this vfs
    and returns a set of requirements

    If allowmissing is True, we suppress ENOENT if raised"""
    # requires file contains a newline-delimited list of
    # features/capabilities the opener (us) must have in order to use
    # the repository. This file was introduced in Mercurial 0.9.2,
    # which means very old repositories may not have one. We assume
    # a missing file translates to no requirements.
    try:
        requirements = set(vfs.read(b'requires').splitlines())
    except IOError as e:
        if not (allowmissing and e.errno == errno.ENOENT):
            raise
        requirements = set()
    return requirements
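
# For reference, the requires file is nothing more than newline-delimited
# feature names; a typical modern repository might contain (illustrative):
#
#     dotencode
#     fncache
#     generaldelta
#     revlogv1
#     sparserevlog
#     store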


def makelocalrepository(baseui, path, intents=None):
    """Create a local repository object.

    Given arguments needed to construct a local repository, this function
    performs various early repository loading functionality (such as
    reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
    the repository can be opened, derives a type suitable for representing
    that repository, and returns an instance of it.

    The returned object conforms to the ``repository.completelocalrepository``
    interface.

    The repository type is derived by calling a series of factory functions
    for each aspect/interface of the final repository. These are defined by
    ``REPO_INTERFACES``.

    Each factory function is called to produce a type implementing a specific
    interface. The cumulative list of returned types will be combined into a
    new type and that type will be instantiated to represent the local
    repository.

    The factory functions each receive various state that may be consulted
    as part of deriving a type.

    Extensions should wrap these factory functions to customize repository type
    creation. Note that an extension's wrapped function may be called even if
    that extension is not loaded for the repo being constructed. Extensions
    should check if their ``__name__`` appears in the
    ``extensionmodulenames`` set passed to the factory function and no-op if
    not.
    """
    ui = baseui.copy()
    # Prevent copying repo configuration.
    ui.copy = baseui.copy

    # Working directory VFS rooted at repository root.
    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    # Main VFS for .hg/ directory.
    hgpath = wdirvfs.join(b'.hg')
    hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
    # Whether this repository is a shared one or not
    shared = False
    # If this repository is shared, vfs pointing to the shared repo
    sharedvfs = None

    # The .hg/ path should exist and should be a directory. All other
    # cases are errors.
    if not hgvfs.isdir():
        try:
            hgvfs.stat()
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise
        except ValueError as e:
            # Can be raised on Python 3.8 when path is invalid.
            raise error.Abort(
                _(b'invalid path %s: %s') % (path, pycompat.bytestr(e))
            )

        raise error.RepoError(_(b'repository %s not found') % path)

    requirements = _readrequires(hgvfs, True)
    shared = (
        requirementsmod.SHARED_REQUIREMENT in requirements
        or requirementsmod.RELATIVE_SHARED_REQUIREMENT in requirements
    )
    if shared:
        sharedvfs = _getsharedvfs(hgvfs, requirements)

    # if .hg/requires contains the sharesafe requirement, it means
    # there exists a `.hg/store/requires` too and we should read it
    # NOTE: presence of SHARESAFE_REQUIREMENT implies that the store
    # requirement is present. We never write SHARESAFE_REQUIREMENT for a repo
    # if store is not present; refer to checkrequirementscompat() for that
    if requirementsmod.SHARESAFE_REQUIREMENT in requirements:
        if shared:
            # This is a shared repo
            storevfs = vfsmod.vfs(sharedvfs.join(b'store'))
        else:
            storevfs = vfsmod.vfs(hgvfs.join(b'store'))

        requirements |= _readrequires(storevfs, False)

    # The .hg/hgrc file may load extensions or contain config options
    # that influence repository construction. Attempt to load it and
    # process any new extensions that it may have pulled in.
    if loadhgrc(ui, wdirvfs, hgvfs, requirements, sharedvfs):
        afterhgrcload(ui, wdirvfs, hgvfs, requirements)
        extensions.loadall(ui)
        extensions.populateui(ui)

    # Set of module names of extensions loaded for this repository.
    extensionmodulenames = {m.__name__ for n, m in extensions.extensions(ui)}

    supportedrequirements = gathersupportedrequirements(ui)

    # We first validate the requirements are known.
    ensurerequirementsrecognized(requirements, supportedrequirements)

    # Then we validate that the known set is reasonable to use together.
    ensurerequirementscompatible(ui, requirements)

    # TODO there are unhandled edge cases related to opening repositories with
    # shared storage. If storage is shared, we should also test for requirements
    # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
    # that repo, as that repo may load extensions needed to open it. This is a
    # bit complicated because we don't want the other hgrc to overwrite settings
    # in this hgrc.
    #
    # This bug is somewhat mitigated by the fact that we copy the .hg/requires
    # file when sharing repos. But if a requirement is added after the share is
    # performed, thereby introducing a new requirement for the opener, we may
    # not see that and could encounter a run-time error interacting with
    # that shared store since it has an unknown-to-us requirement.

    # At this point, we know we should be capable of opening the repository.
    # Now get on with doing that.

    features = set()

    # The "store" part of the repository holds versioned data. How it is
    # accessed is determined by various requirements. If `shared` or
    # `relshared` requirements are present, this indicates the current
    # repository is a share and the store exists in the path mentioned in
    # `.hg/sharedpath`
    if shared:
        storebasepath = sharedvfs.base
        cachepath = sharedvfs.join(b'cache')
        features.add(repository.REPO_FEATURE_SHARED_STORAGE)
    else:
        storebasepath = hgvfs.base
        cachepath = hgvfs.join(b'cache')
    wcachepath = hgvfs.join(b'wcache')

    # The store has changed over time and the exact layout is dictated by
    # requirements. The store interface abstracts differences across all
    # of them.
    store = makestore(
        requirements,
        storebasepath,
        lambda base: vfsmod.vfs(base, cacheaudited=True),
    )
    hgvfs.createmode = store.createmode

    storevfs = store.vfs
    storevfs.options = resolvestorevfsoptions(ui, requirements, features)

    # The cache vfs is used to manage cache files.
    cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
    cachevfs.createmode = store.createmode
    # The cache vfs is used to manage cache files related to the working copy
    wcachevfs = vfsmod.vfs(wcachepath, cacheaudited=True)
    wcachevfs.createmode = store.createmode

    # Now resolve the type for the repository object. We do this by repeatedly
    # calling a factory function to produce types for specific aspects of the
    # repo's operation. The aggregate returned types are used as base classes
    # for a dynamically-derived type, which will represent our new repository.

    bases = []
    extrastate = {}

    for iface, fn in REPO_INTERFACES:
        # We pass all potentially useful state to give extensions tons of
        # flexibility.
        typ = fn()(
            ui=ui,
            intents=intents,
            requirements=requirements,
            features=features,
            wdirvfs=wdirvfs,
            hgvfs=hgvfs,
            store=store,
            storevfs=storevfs,
            storeoptions=storevfs.options,
            cachevfs=cachevfs,
            wcachevfs=wcachevfs,
            extensionmodulenames=extensionmodulenames,
            extrastate=extrastate,
            baseclasses=bases,
        )

        if not isinstance(typ, type):
            raise error.ProgrammingError(
                b'unable to construct type for %s' % iface
            )

        bases.append(typ)

    # type() allows you to use characters in type names that wouldn't be
    # recognized as Python symbols in source code. We abuse that to add
    # rich information about our constructed repo.
    name = pycompat.sysstr(
        b'derivedrepo:%s<%s>' % (wdirvfs.base, b','.join(sorted(requirements)))
    )

    cls = type(name, tuple(bases), {})

    return cls(
        baseui=baseui,
        ui=ui,
        origroot=path,
        wdirvfs=wdirvfs,
        hgvfs=hgvfs,
        requirements=requirements,
        supportedrequirements=supportedrequirements,
        sharedpath=storebasepath,
        store=store,
        cachevfs=cachevfs,
        wcachevfs=wcachevfs,
        features=features,
        intents=intents,
    )
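
# The dynamic class composition above is plain three-argument type(); a
# self-contained sketch of the same pattern, with stand-in names:
#
#     class main(object):
#         pass
#
#     class filestorage(object):
#         pass
#
#     bases = (main, filestorage)
#     cls = type('derivedrepo:/path<fncache,store>', bases, {})
#     repo = cls()  # the instance exposes every composed interface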


def loadhgrc(ui, wdirvfs, hgvfs, requirements, sharedvfs=None):
    """Load hgrc files/content into a ui instance.

    This is called during repository opening to load any additional
    config files or settings relevant to the current repository.

    Returns a bool indicating whether any additional configs were loaded.

    Extensions should monkeypatch this function to modify how per-repo
    configs are loaded. For example, an extension may wish to pull in
    configs from alternate files or sources.

    sharedvfs is the vfs object pointing to the source repo if the current
    one is a shared one
    """
    if not rcutil.use_repo_hgrc():
        return False

    ret = False
    # first load config from the shared source if we have to
    if requirementsmod.SHARESAFE_REQUIREMENT in requirements and sharedvfs:
        try:
            ui.readconfig(sharedvfs.join(b'hgrc'), root=sharedvfs.base)
            ret = True
        except IOError:
            pass

    try:
        ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
        ret = True
    except IOError:
        pass

    try:
        ui.readconfig(hgvfs.join(b'hgrc-not-shared'), root=wdirvfs.base)
        ret = True
    except IOError:
        pass

    return ret
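
# Extensions usually hook loadhgrc() via extensions.wrapfunction(); a minimal
# sketch (the b'myext' source and the config knob are hypothetical). The
# wrapper must preserve the boolean return contract documented above:
#
#     def _loadhgrc(orig, ui, wdirvfs, hgvfs, requirements, *args, **kwargs):
#         ret = orig(ui, wdirvfs, hgvfs, requirements, *args, **kwargs)
#         ui.setconfig(b'ui', b'example', b'on', source=b'myext')
#         return True  # extra config was loaded on top of `ret`
#
#     extensions.wrapfunction(localrepo, 'loadhgrc', _loadhgrc)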


def afterhgrcload(ui, wdirvfs, hgvfs, requirements):
    """Perform additional actions after .hg/hgrc is loaded.

    This function is called during repository loading immediately after
    the .hg/hgrc file is loaded and before per-repo extensions are loaded.

    The function can be used to validate configs, automatically add
    options (including extensions) based on requirements, etc.
    """

    # Map of requirements to list of extensions to load automatically when
    # requirement is present.
    autoextensions = {
        b'git': [b'git'],
        b'largefiles': [b'largefiles'],
        b'lfs': [b'lfs'],
    }

    for requirement, names in sorted(autoextensions.items()):
        if requirement not in requirements:
            continue

        for name in names:
            if not ui.hasconfig(b'extensions', name):
                ui.setconfig(b'extensions', name, b'', source=b'autoload')


def gathersupportedrequirements(ui):
    """Determine the complete set of recognized requirements."""
    # Start with all requirements supported by this file.
    supported = set(localrepository._basesupported)

    # Execute ``featuresetupfuncs`` entries if they belong to an extension
    # relevant to this ui instance.
    modules = {m.__name__ for n, m in extensions.extensions(ui)}

    for fn in featuresetupfuncs:
        if fn.__module__ in modules:
            fn(ui, supported)

    # Add derived requirements from registered compression engines.
    for name in util.compengines:
        engine = util.compengines[name]
        if engine.available() and engine.revlogheader():
            supported.add(b'exp-compression-%s' % name)
            if engine.name() == b'zstd':
                supported.add(b'revlog-compression-zstd')

    return supported
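
# Extensions declare support for custom requirements by registering a
# callback in ``featuresetupfuncs``; a minimal sketch (b'exp-myfeature' is an
# illustrative requirement name):
#
#     def featuresetup(ui, supported):
#         supported.add(b'exp-myfeature')
#
#     def uisetup(ui):
#         localrepo.featuresetupfuncs.add(featuresetup)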
791
792
792
793
793 def ensurerequirementsrecognized(requirements, supported):
794 def ensurerequirementsrecognized(requirements, supported):
794 """Validate that a set of local requirements is recognized.
795 """Validate that a set of local requirements is recognized.
795
796
796 Receives a set of requirements. Raises an ``error.RepoError`` if there
797 Receives a set of requirements. Raises an ``error.RepoError`` if there
797 exists any requirement in that set that currently loaded code doesn't
798 exists any requirement in that set that currently loaded code doesn't
798 recognize.
799 recognize.
799
800
800 Returns a set of supported requirements.
801 Returns a set of supported requirements.
801 """
802 """
802 missing = set()
803 missing = set()
803
804
804 for requirement in requirements:
805 for requirement in requirements:
805 if requirement in supported:
806 if requirement in supported:
806 continue
807 continue
807
808
808 if not requirement or not requirement[0:1].isalnum():
809 if not requirement or not requirement[0:1].isalnum():
809 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
810 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
810
811
811 missing.add(requirement)
812 missing.add(requirement)
812
813
813 if missing:
814 if missing:
814 raise error.RequirementError(
815 raise error.RequirementError(
815 _(b'repository requires features unknown to this Mercurial: %s')
816 _(b'repository requires features unknown to this Mercurial: %s')
816 % b' '.join(sorted(missing)),
817 % b' '.join(sorted(missing)),
817 hint=_(
818 hint=_(
818 b'see https://mercurial-scm.org/wiki/MissingRequirement '
819 b'see https://mercurial-scm.org/wiki/MissingRequirement '
819 b'for more information'
820 b'for more information'
820 ),
821 ),
821 )
822 )
822
823
823
824
824 def ensurerequirementscompatible(ui, requirements):
825 def ensurerequirementscompatible(ui, requirements):
825 """Validates that a set of recognized requirements is mutually compatible.
826 """Validates that a set of recognized requirements is mutually compatible.
826
827
827 Some requirements may not be compatible with others or require
828 Some requirements may not be compatible with others or require
828 config options that aren't enabled. This function is called during
829 config options that aren't enabled. This function is called during
829 repository opening to ensure that the set of requirements needed
830 repository opening to ensure that the set of requirements needed
830 to open a repository is sane and compatible with config options.
831 to open a repository is sane and compatible with config options.
831
832
832 Extensions can monkeypatch this function to perform additional
833 Extensions can monkeypatch this function to perform additional
833 checking.
834 checking.
834
835
835 ``error.RepoError`` should be raised on failure.
836 ``error.RepoError`` should be raised on failure.
836 """
837 """
837 if (
838 if (
838 requirementsmod.SPARSE_REQUIREMENT in requirements
839 requirementsmod.SPARSE_REQUIREMENT in requirements
839 and not sparse.enabled
840 and not sparse.enabled
840 ):
841 ):
841 raise error.RepoError(
842 raise error.RepoError(
842 _(
843 _(
843 b'repository is using sparse feature but '
844 b'repository is using sparse feature but '
844 b'sparse is not enabled; enable the '
845 b'sparse is not enabled; enable the '
845 b'"sparse" extensions to access'
846 b'"sparse" extensions to access'
846 )
847 )
847 )
848 )
848
849
849
850
850 def makestore(requirements, path, vfstype):
851 def makestore(requirements, path, vfstype):
851 """Construct a storage object for a repository."""
852 """Construct a storage object for a repository."""
852 if b'store' in requirements:
853 if b'store' in requirements:
853 if b'fncache' in requirements:
854 if b'fncache' in requirements:
854 return storemod.fncachestore(
855 return storemod.fncachestore(
855 path, vfstype, b'dotencode' in requirements
856 path, vfstype, b'dotencode' in requirements
856 )
857 )
857
858
858 return storemod.encodedstore(path, vfstype)
859 return storemod.encodedstore(path, vfstype)
859
860
860 return storemod.basicstore(path, vfstype)
861 return storemod.basicstore(path, vfstype)
861
862
862
863
863 def resolvestorevfsoptions(ui, requirements, features):
864 def resolvestorevfsoptions(ui, requirements, features):
864 """Resolve the options to pass to the store vfs opener.
865 """Resolve the options to pass to the store vfs opener.
865
866
866 The returned dict is used to influence behavior of the storage layer.
867 The returned dict is used to influence behavior of the storage layer.
867 """
868 """
868 options = {}
869 options = {}
869
870
870 if requirementsmod.TREEMANIFEST_REQUIREMENT in requirements:
871 if requirementsmod.TREEMANIFEST_REQUIREMENT in requirements:
871 options[b'treemanifest'] = True
872 options[b'treemanifest'] = True
872
873
873 # experimental config: format.manifestcachesize
874 # experimental config: format.manifestcachesize
874 manifestcachesize = ui.configint(b'format', b'manifestcachesize')
875 manifestcachesize = ui.configint(b'format', b'manifestcachesize')
875 if manifestcachesize is not None:
876 if manifestcachesize is not None:
876 options[b'manifestcachesize'] = manifestcachesize
877 options[b'manifestcachesize'] = manifestcachesize
877
878
878 # In the absence of another requirement superseding a revlog-related
879 # In the absence of another requirement superseding a revlog-related
879 # requirement, we have to assume the repo is using revlog version 0.
880 # requirement, we have to assume the repo is using revlog version 0.
880 # This revlog format is super old and we don't bother trying to parse
881 # This revlog format is super old and we don't bother trying to parse
881 # opener options for it because those options wouldn't do anything
882 # opener options for it because those options wouldn't do anything
882 # meaningful on such old repos.
883 # meaningful on such old repos.
883 if (
884 if (
884 b'revlogv1' in requirements
885 b'revlogv1' in requirements
885 or requirementsmod.REVLOGV2_REQUIREMENT in requirements
886 or requirementsmod.REVLOGV2_REQUIREMENT in requirements
886 ):
887 ):
887 options.update(resolverevlogstorevfsoptions(ui, requirements, features))
888 options.update(resolverevlogstorevfsoptions(ui, requirements, features))
888 else: # explicitly mark repo as using revlogv0
889 else: # explicitly mark repo as using revlogv0
889 options[b'revlogv0'] = True
890 options[b'revlogv0'] = True
890
891
891 if requirementsmod.COPIESSDC_REQUIREMENT in requirements:
892 if requirementsmod.COPIESSDC_REQUIREMENT in requirements:
892 options[b'copies-storage'] = b'changeset-sidedata'
893 options[b'copies-storage'] = b'changeset-sidedata'
893 else:
894 else:
894 writecopiesto = ui.config(b'experimental', b'copies.write-to')
895 writecopiesto = ui.config(b'experimental', b'copies.write-to')
895 copiesextramode = (b'changeset-only', b'compatibility')
896 copiesextramode = (b'changeset-only', b'compatibility')
896 if writecopiesto in copiesextramode:
897 if writecopiesto in copiesextramode:
897 options[b'copies-storage'] = b'extra'
898 options[b'copies-storage'] = b'extra'
898
899
899 return options
900 return options
900
901
901
902
902 def resolverevlogstorevfsoptions(ui, requirements, features):
903 def resolverevlogstorevfsoptions(ui, requirements, features):
903 """Resolve opener options specific to revlogs."""
904 """Resolve opener options specific to revlogs."""
904
905
905 options = {}
906 options = {}
906 options[b'flagprocessors'] = {}
907 options[b'flagprocessors'] = {}
907
908
908 if b'revlogv1' in requirements:
909 if b'revlogv1' in requirements:
909 options[b'revlogv1'] = True
910 options[b'revlogv1'] = True
910 if requirementsmod.REVLOGV2_REQUIREMENT in requirements:
911 if requirementsmod.REVLOGV2_REQUIREMENT in requirements:
911 options[b'revlogv2'] = True
912 options[b'revlogv2'] = True
912
913
913 if b'generaldelta' in requirements:
914 if b'generaldelta' in requirements:
914 options[b'generaldelta'] = True
915 options[b'generaldelta'] = True
915
916
916 # experimental config: format.chunkcachesize
917 # experimental config: format.chunkcachesize
917 chunkcachesize = ui.configint(b'format', b'chunkcachesize')
918 chunkcachesize = ui.configint(b'format', b'chunkcachesize')
918 if chunkcachesize is not None:
919 if chunkcachesize is not None:
919 options[b'chunkcachesize'] = chunkcachesize
920 options[b'chunkcachesize'] = chunkcachesize
920
921
921 deltabothparents = ui.configbool(
922 deltabothparents = ui.configbool(
922 b'storage', b'revlog.optimize-delta-parent-choice'
923 b'storage', b'revlog.optimize-delta-parent-choice'
923 )
924 )
924 options[b'deltabothparents'] = deltabothparents
925 options[b'deltabothparents'] = deltabothparents
925
926
926 lazydelta = ui.configbool(b'storage', b'revlog.reuse-external-delta')
927 lazydelta = ui.configbool(b'storage', b'revlog.reuse-external-delta')
927 lazydeltabase = False
928 lazydeltabase = False
928 if lazydelta:
929 if lazydelta:
929 lazydeltabase = ui.configbool(
930 lazydeltabase = ui.configbool(
930 b'storage', b'revlog.reuse-external-delta-parent'
931 b'storage', b'revlog.reuse-external-delta-parent'
931 )
932 )
932 if lazydeltabase is None:
933 if lazydeltabase is None:
933 lazydeltabase = not scmutil.gddeltaconfig(ui)
934 lazydeltabase = not scmutil.gddeltaconfig(ui)
934 options[b'lazydelta'] = lazydelta
935 options[b'lazydelta'] = lazydelta
935 options[b'lazydeltabase'] = lazydeltabase
936 options[b'lazydeltabase'] = lazydeltabase
936
937
937 chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
938 chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
938 if 0 <= chainspan:
939 if 0 <= chainspan:
939 options[b'maxdeltachainspan'] = chainspan
940 options[b'maxdeltachainspan'] = chainspan
940
941
941 mmapindexthreshold = ui.configbytes(b'experimental', b'mmapindexthreshold')
942 mmapindexthreshold = ui.configbytes(b'experimental', b'mmapindexthreshold')
942 if mmapindexthreshold is not None:
943 if mmapindexthreshold is not None:
943 options[b'mmapindexthreshold'] = mmapindexthreshold
944 options[b'mmapindexthreshold'] = mmapindexthreshold
944
945
945 withsparseread = ui.configbool(b'experimental', b'sparse-read')
946 withsparseread = ui.configbool(b'experimental', b'sparse-read')
946 srdensitythres = float(
947 srdensitythres = float(
947 ui.config(b'experimental', b'sparse-read.density-threshold')
948 ui.config(b'experimental', b'sparse-read.density-threshold')
948 )
949 )
949 srmingapsize = ui.configbytes(b'experimental', b'sparse-read.min-gap-size')
950 srmingapsize = ui.configbytes(b'experimental', b'sparse-read.min-gap-size')
950 options[b'with-sparse-read'] = withsparseread
951 options[b'with-sparse-read'] = withsparseread
951 options[b'sparse-read-density-threshold'] = srdensitythres
952 options[b'sparse-read-density-threshold'] = srdensitythres
952 options[b'sparse-read-min-gap-size'] = srmingapsize
953 options[b'sparse-read-min-gap-size'] = srmingapsize
953
954
954 sparserevlog = requirementsmod.SPARSEREVLOG_REQUIREMENT in requirements
955 sparserevlog = requirementsmod.SPARSEREVLOG_REQUIREMENT in requirements
955 options[b'sparse-revlog'] = sparserevlog
956 options[b'sparse-revlog'] = sparserevlog
956 if sparserevlog:
957 if sparserevlog:
957 options[b'generaldelta'] = True
958 options[b'generaldelta'] = True
958
959
959 sidedata = requirementsmod.SIDEDATA_REQUIREMENT in requirements
960 sidedata = requirementsmod.SIDEDATA_REQUIREMENT in requirements
960 options[b'side-data'] = sidedata
961 options[b'side-data'] = sidedata
961
962
962 maxchainlen = None
963 maxchainlen = None
963 if sparserevlog:
964 if sparserevlog:
964 maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
965 maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
965 # experimental config: format.maxchainlen
966 # experimental config: format.maxchainlen
966 maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
967 maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
967 if maxchainlen is not None:
968 if maxchainlen is not None:
968 options[b'maxchainlen'] = maxchainlen
969 options[b'maxchainlen'] = maxchainlen
969
970
970 for r in requirements:
971 for r in requirements:
971 # we allow multiple compression engine requirement to co-exist because
972 # we allow multiple compression engine requirement to co-exist because
972 # strickly speaking, revlog seems to support mixed compression style.
973 # strickly speaking, revlog seems to support mixed compression style.
973 #
974 #
974 # The compression used for new entries will be "the last one"
975 # The compression used for new entries will be "the last one"
975 prefix = r.startswith
976 prefix = r.startswith
976 if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
977 if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
977 options[b'compengine'] = r.split(b'-', 2)[2]
978 options[b'compengine'] = r.split(b'-', 2)[2]
978
979
979 options[b'zlib.level'] = ui.configint(b'storage', b'revlog.zlib.level')
980 options[b'zlib.level'] = ui.configint(b'storage', b'revlog.zlib.level')
980 if options[b'zlib.level'] is not None:
981 if options[b'zlib.level'] is not None:
981 if not (0 <= options[b'zlib.level'] <= 9):
982 if not (0 <= options[b'zlib.level'] <= 9):
982 msg = _(b'invalid value for `storage.revlog.zlib.level` config: %d')
983 msg = _(b'invalid value for `storage.revlog.zlib.level` config: %d')
983 raise error.Abort(msg % options[b'zlib.level'])
984 raise error.Abort(msg % options[b'zlib.level'])
984 options[b'zstd.level'] = ui.configint(b'storage', b'revlog.zstd.level')
985 options[b'zstd.level'] = ui.configint(b'storage', b'revlog.zstd.level')
985 if options[b'zstd.level'] is not None:
986 if options[b'zstd.level'] is not None:
986 if not (0 <= options[b'zstd.level'] <= 22):
987 if not (0 <= options[b'zstd.level'] <= 22):
987 msg = _(b'invalid value for `storage.revlog.zstd.level` config: %d')
988 msg = _(b'invalid value for `storage.revlog.zstd.level` config: %d')
988 raise error.Abort(msg % options[b'zstd.level'])
989 raise error.Abort(msg % options[b'zstd.level'])
989
990
990 if requirementsmod.NARROW_REQUIREMENT in requirements:
991 if requirementsmod.NARROW_REQUIREMENT in requirements:
991 options[b'enableellipsis'] = True
992 options[b'enableellipsis'] = True
992
993
993 if ui.configbool(b'experimental', b'rust.index'):
994 if ui.configbool(b'experimental', b'rust.index'):
994 options[b'rust.index'] = True
995 options[b'rust.index'] = True
995 if requirementsmod.NODEMAP_REQUIREMENT in requirements:
996 if requirementsmod.NODEMAP_REQUIREMENT in requirements:
996 options[b'persistent-nodemap'] = True
997 options[b'persistent-nodemap'] = True
997 if ui.configbool(b'storage', b'revlog.nodemap.mmap'):
998 if ui.configbool(b'storage', b'revlog.nodemap.mmap'):
998 options[b'persistent-nodemap.mmap'] = True
999 options[b'persistent-nodemap.mmap'] = True
999 epnm = ui.config(b'storage', b'revlog.nodemap.mode')
1000 epnm = ui.config(b'storage', b'revlog.nodemap.mode')
1000 options[b'persistent-nodemap.mode'] = epnm
1001 options[b'persistent-nodemap.mode'] = epnm
1001 if ui.configbool(b'devel', b'persistent-nodemap'):
1002 if ui.configbool(b'devel', b'persistent-nodemap'):
1002 options[b'devel-force-nodemap'] = True
1003 options[b'devel-force-nodemap'] = True
1003
1004
1004 return options
1005 return options
1005
1006
1006
1007
def makemain(**kwargs):
    """Produce a type conforming to ``ilocalrepositorymain``."""
    return localrepository


@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlogfilestorage(object):
    """File storage when using revlogs."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.filelog(self.svfs, path)


@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlognarrowfilestorage(object):
    """File storage when using revlogs and narrow files."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.narrowfilelog(self.svfs, path, self._storenarrowmatch)


def makefilestorage(requirements, features, **kwargs):
    """Produce a type conforming to ``ilocalrepositoryfilestorage``."""
    features.add(repository.REPO_FEATURE_REVLOG_FILE_STORAGE)
    features.add(repository.REPO_FEATURE_STREAM_CLONE)

    if requirementsmod.NARROW_REQUIREMENT in requirements:
        return revlognarrowfilestorage
    else:
        return revlogfilestorage


# List of repository interfaces and factory functions for them. Each
# will be called in order during ``makelocalrepository()`` to iteratively
# derive the final type for a local repository instance. We capture the
# function as a lambda so we don't hold a reference and the module-level
# functions can be wrapped.
REPO_INTERFACES = [
    (repository.ilocalrepositorymain, lambda: makemain),
    (repository.ilocalrepositoryfilestorage, lambda: makefilestorage),
]
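# Hedged sketch of how the factories above are consumed. This is an
# illustration only, not the real assembly code: the actual logic and the
# exact factory arguments live in ``makelocalrepository()``, and the names
# ``bases``/``derivedrepo`` below are placeholders.
#
#     bases = []
#     for iface, factoryfn in REPO_INTERFACES:
#         bases.append(factoryfn()(requirements=..., features=...))
#     cls = type('derivedrepo', tuple(bases), {})
#
# Each factory returns a class; the classes are combined into one final
# repository type, which is why the interfaces are listed in order.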


@interfaceutil.implementer(repository.ilocalrepositorymain)
class localrepository(object):
    """Main class for representing local repositories.

    All local repositories are instances of this class.

    Constructed on its own, instances of this class are not usable as
    repository objects. To obtain a usable repository object, call
    ``hg.repository()``, ``localrepo.instance()``, or
    ``localrepo.makelocalrepository()``. The latter is the lowest-level.
    ``instance()`` adds support for creating new repositories.
    ``hg.repository()`` adds more extension integration, including calling
    ``reposetup()``. Generally speaking, ``hg.repository()`` should be
    used.
    """

    # obsolete experimental requirements:
    # - manifestv2: An experimental new manifest format that allowed
    #   for stem compression of long paths. Experiment ended up not
    #   being successful (repository sizes went up due to worse delta
    #   chains), and the code was deleted in 4.6.
    supportedformats = {
        b'revlogv1',
        b'generaldelta',
        requirementsmod.TREEMANIFEST_REQUIREMENT,
        requirementsmod.COPIESSDC_REQUIREMENT,
        requirementsmod.REVLOGV2_REQUIREMENT,
        requirementsmod.SIDEDATA_REQUIREMENT,
        requirementsmod.SPARSEREVLOG_REQUIREMENT,
        requirementsmod.NODEMAP_REQUIREMENT,
        bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT,
        requirementsmod.SHARESAFE_REQUIREMENT,
    }
    _basesupported = supportedformats | {
        b'store',
        b'fncache',
        requirementsmod.SHARED_REQUIREMENT,
        requirementsmod.RELATIVE_SHARED_REQUIREMENT,
        b'dotencode',
        requirementsmod.SPARSE_REQUIREMENT,
        requirementsmod.INTERNAL_PHASE_REQUIREMENT,
    }

    # list of prefixes for files which can be written without 'wlock'
    # Extensions should extend this list when needed
    _wlockfreeprefix = {
        # We might consider requiring 'wlock' for the next
        # two, but pretty much all the existing code assumes
        # wlock is not needed so we keep them excluded for
        # now.
        b'hgrc',
        b'requires',
        # XXX cache is a complicated business; someone
        # should investigate this in depth at some point
        b'cache/',
        # XXX shouldn't dirstate be covered by the wlock?
        b'dirstate',
        # XXX bisect was still a bit too messy at the time
        # this changeset was introduced. Someone should fix
        # the remaining bit and drop this line
        b'bisect.state',
    }
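    # Hedged sketch of the extension hook mentioned above. The attribute and
    # ``reposetup()`` are real; the extension and prefix names are made up:
    #
    #     def reposetup(ui, repo):
    #         # allow this extension's state file to be written lock-free
    #         repo._wlockfreeprefix.add(b'myext.state')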

    def __init__(
        self,
        baseui,
        ui,
        origroot,
        wdirvfs,
        hgvfs,
        requirements,
        supportedrequirements,
        sharedpath,
        store,
        cachevfs,
        wcachevfs,
        features,
        intents=None,
    ):
        """Create a new local repository instance.

        Most callers should use ``hg.repository()``, ``localrepo.instance()``,
        or ``localrepo.makelocalrepository()`` for obtaining a new repository
        object.

        Arguments:

        baseui
           ``ui.ui`` instance that ``ui`` argument was based off of.

        ui
           ``ui.ui`` instance for use by the repository.

        origroot
           ``bytes`` path to working directory root of this repository.

        wdirvfs
           ``vfs.vfs`` rooted at the working directory.

        hgvfs
           ``vfs.vfs`` rooted at .hg/

        requirements
           ``set`` of bytestrings representing repository opening
           requirements.

        supportedrequirements
           ``set`` of bytestrings representing repository requirements that we
           know how to open. May be a superset of ``requirements``.

        sharedpath
           ``bytes`` defining the path to the storage base directory. Points
           to a ``.hg/`` directory somewhere.

        store
           ``store.basicstore`` (or derived) instance providing access to
           versioned storage.

        cachevfs
           ``vfs.vfs`` used for cache files.

        wcachevfs
           ``vfs.vfs`` used for cache files related to the working copy.

        features
           ``set`` of bytestrings defining features/capabilities of this
           instance.

        intents
           ``set`` of system strings indicating what this repo will be used
           for.
        """
        self.baseui = baseui
        self.ui = ui
        self.origroot = origroot
        # vfs rooted at working directory.
        self.wvfs = wdirvfs
        self.root = wdirvfs.base
        # vfs rooted at .hg/. Used to access most non-store paths.
        self.vfs = hgvfs
        self.path = hgvfs.base
        self.requirements = requirements
        self.supported = supportedrequirements
        self.sharedpath = sharedpath
        self.store = store
        self.cachevfs = cachevfs
        self.wcachevfs = wcachevfs
        self.features = features

        self.filtername = None

        if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
            b'devel', b'check-locks'
        ):
            self.vfs.audit = self._getvfsward(self.vfs.audit)
        # A list of callbacks to shape the phase if no data were found.
        # Callbacks are in the form: func(repo, roots) --> processed root.
        # This list is to be filled by extensions during repo setup.
        self._phasedefaults = []

        color.setup(self.ui)

        self.spath = self.store.path
        self.svfs = self.store.vfs
        self.sjoin = self.store.join
        if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
            b'devel', b'check-locks'
        ):
            if util.safehasattr(self.svfs, b'vfs'):  # this is filtervfs
                self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
            else:  # standard vfs
                self.svfs.audit = self._getsvfsward(self.svfs.audit)

        self._dirstatevalidatewarned = False

        self._branchcaches = branchmap.BranchMapCache()
        self._revbranchcache = None
        self._filterpats = {}
        self._datafilters = {}
        self._transref = self._lockref = self._wlockref = None

        # A cache for various files under .hg/ that tracks file changes
        # (used by the filecache decorator).
        #
        # Maps a property name to its util.filecacheentry
        self._filecache = {}

        # holds sets of revisions to be filtered
        # should be cleared when something might have changed the filter value:
        # - new changesets,
        # - phase change,
        # - new obsolescence marker,
        # - working directory parent change,
        # - bookmark changes
        self.filteredrevcache = {}

        # post-dirstate-status hooks
        self._postdsstatus = []

        # generic mapping between names and nodes
        self.names = namespaces.namespaces()

        # Key to signature value.
        self._sparsesignaturecache = {}
        # Signature to cached matcher instance.
        self._sparsematchercache = {}

        self._extrafilterid = repoview.extrafilter(ui)

        self.filecopiesmode = None
        if requirementsmod.COPIESSDC_REQUIREMENT in self.requirements:
            self.filecopiesmode = b'changeset-sidedata'

    def _getvfsward(self, origfunc):
        """build a ward for self.vfs"""
        rref = weakref.ref(self)

        def checkvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if (
                repo is None
                or not util.safehasattr(repo, b'_wlockref')
                or not util.safehasattr(repo, b'_lockref')
            ):
                return
            if mode in (None, b'r', b'rb'):
                return
            if path.startswith(repo.path):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.path) + 1 :]
            if path.startswith(b'cache/'):
                msg = b'accessing cache with vfs instead of cachevfs: "%s"'
                repo.ui.develwarn(msg % path, stacklevel=3, config=b"cache-vfs")
            # path prefixes covered by 'lock'
            vfs_path_prefixes = (
                b'journal.',
                b'undo.',
                b'strip-backup/',
                b'cache/',
            )
            if any(path.startswith(prefix) for prefix in vfs_path_prefixes):
                if repo._currentlock(repo._lockref) is None:
                    repo.ui.develwarn(
                        b'write with no lock: "%s"' % path,
                        stacklevel=3,
                        config=b'check-locks',
                    )
            elif repo._currentlock(repo._wlockref) is None:
                # rest of vfs files are covered by 'wlock'
                #
                # exclude special files
                for prefix in self._wlockfreeprefix:
                    if path.startswith(prefix):
                        return
                repo.ui.develwarn(
                    b'write with no wlock: "%s"' % path,
                    stacklevel=3,
                    config=b'check-locks',
                )
            return ret

        return checkvfs
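    # The ward above is only installed when 'devel.all-warnings' or
    # 'devel.check-locks' is set (see __init__). A hedged illustration of the
    # kind of message it emits, built from the develwarn() calls above (the
    # exact prefix and formatting are handled by ui.develwarn elsewhere):
    #
    #     devel-warn: write with no wlock: "bookmarks.current"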

    def _getsvfsward(self, origfunc):
        """build a ward for self.svfs"""
        rref = weakref.ref(self)

        def checksvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if repo is None or not util.safehasattr(repo, b'_lockref'):
                return
            if mode in (None, b'r', b'rb'):
                return
            if path.startswith(repo.sharedpath):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.sharedpath) + 1 :]
            if repo._currentlock(repo._lockref) is None:
                repo.ui.develwarn(
                    b'write with no lock: "%s"' % path, stacklevel=4
                )
            return ret

        return checksvfs

    def close(self):
        self._writecaches()

    def _writecaches(self):
        if self._revbranchcache:
            self._revbranchcache.write()

    def _restrictcapabilities(self, caps):
        if self.ui.configbool(b'experimental', b'bundle2-advertise'):
            caps = set(caps)
            capsblob = bundle2.encodecaps(
                bundle2.getrepocaps(self, role=b'client')
            )
            caps.add(b'bundle2=' + urlreq.quote(capsblob))
        return caps

    # Don't cache auditor/nofsauditor, or you'll end up with a reference cycle:
    # self -> auditor -> self._checknested -> self

    @property
    def auditor(self):
        # This is only used by context.workingctx.match in order to
        # detect files in subrepos.
        return pathutil.pathauditor(self.root, callback=self._checknested)

    @property
    def nofsauditor(self):
        # This is only used by context.basectx.match in order to detect
        # files in subrepos.
        return pathutil.pathauditor(
            self.root, callback=self._checknested, realfs=False, cached=True
        )

    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1 :]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = b'/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1 :])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self)  # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name, visibilityexceptions=None):
        """Return a filtered version of a repository

        The `name` parameter is the identifier of the requested view. This
        will return a repoview object set "exactly" to the specified view.

        This function does not apply recursive filtering to a repository. For
        example calling `repo.filtered("served")` will return a repoview using
        the "served" view, regardless of the initial view used by `repo`.

        In other words, there is always only one level of `repoview`
        "filtering".
        """
        if self._extrafilterid is not None and b'%' not in name:
            name = name + b'%' + self._extrafilterid

        cls = repoview.newtype(self.unfiltered().__class__)
        return cls(self, name, visibilityexceptions)
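    # Hedged usage sketch for the view mechanism documented above. The view
    # names are real repoview filters; the variable names are illustrative:
    #
    #     served = repo.filtered(b'served')    # what we expose to clients
    #     visible = repo.filtered(b'visible')  # hidden changesets filtered
    #
    # Each call builds a fresh repoview on top of the unfiltered repository,
    # never on top of another view.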

    @mixedrepostorecache(
        (b'bookmarks', b'plain'),
        (b'bookmarks.current', b'plain'),
        (b'bookmarks', b''),
        (b'00changelog.i', b''),
    )
    def _bookmarks(self):
        # Since the multiple files involved in the transaction cannot be
        # written atomically (with current repository format), there is a race
        # condition here.
        #
        # 1) changelog content A is read
        # 2) outside transaction update changelog to content B
        # 3) outside transaction update bookmark file referring to content B
        # 4) bookmarks file content is read and filtered against changelog-A
        #
        # When this happens, bookmarks against nodes missing from A are dropped.
        #
        # Having this happen during read is not great, but it becomes worse
        # when it happens during write, because the bookmarks to the "unknown"
        # nodes will be dropped for good. However, writes happen within locks.
        # This locking makes it possible to have a race-free consistent read.
        # For this purpose, data read from disk before locking is
        # "invalidated" right after the locks are taken. These invalidations
        # are "light": the `filecache` mechanism keeps the data in memory and
        # will reuse it if the underlying files did not change. Not parsing
        # the same data multiple times helps performance.
        #
        # Unfortunately, in the case described above, the files tracked by the
        # bookmarks file cache might not have changed, but the in-memory
        # content is still "wrong" because we used an older changelog content
        # to process the on-disk data. So after locking, the changelog would be
        # refreshed but `_bookmarks` would be preserved.
        # Adding `00changelog.i` to the list of tracked files is not
        # enough, because at the time we build the content for `_bookmarks` in
        # (4), the changelog file has already diverged from the content used
        # for loading `changelog` in (1).
        #
        # To prevent the issue, we force the changelog to be explicitly
        # reloaded while computing `_bookmarks`. The data race can still happen
        # without the lock (with a narrower window), but it would no longer go
        # undetected during the lock time refresh.
        #
        # The new schedule is as follows:
        #
        # 1) filecache logic detects that `_bookmarks` needs to be computed
        # 2) cachestat for `bookmarks` and `changelog` are captured (for book)
        # 3) we force the `changelog` filecache to be tested
        # 4) cachestat for `changelog` are captured (for changelog)
        # 5) `_bookmarks` is computed and cached
        #
        # The step in (3) ensures we have a changelog at least as recent as the
        # cache stat computed in (1). As a result, at locking time:
        # * if the changelog did not change since (1) -> we can reuse the data
        # * otherwise -> the bookmarks get refreshed.
        self._refreshchangelog()
        return bookmarks.bmstore(self)

    def _refreshchangelog(self):
        """make sure the in-memory changelog matches the on-disk one"""
        if 'changelog' in vars(self) and self.currenttransaction() is None:
            del self.changelog

    @property
    def _activebookmark(self):
        return self._bookmarks.active

    # _phasesets depend on changelog. What we need is to call
    # _phasecache.invalidate() if '00changelog.i' was changed, but it
    # can't be easily expressed in the filecache mechanism.
    @storecache(b'phaseroots', b'00changelog.i')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache(b'obsstore')
    def obsstore(self):
        return obsolete.makestore(self.ui, self)

    @storecache(b'00changelog.i')
    def changelog(self):
        # load dirstate before changelog to avoid race see issue6303
        self.dirstate.prefetch_parents()
        return self.store.changelog(txnutil.mayhavepending(self.root))

    @storecache(b'00manifest.i')
    def manifestlog(self):
        return self.store.manifestlog(self, self._storenarrowmatch)

    @repofilecache(b'dirstate')
    def dirstate(self):
        return self._makedirstate()

    def _makedirstate(self):
        """Extension point for wrapping the dirstate per-repo."""
        sparsematchfn = lambda: sparse.matcher(self)

        return dirstate.dirstate(
            self.vfs, self.ui, self.root, self._dirstatevalidate, sparsematchfn
        )

    def _dirstatevalidate(self, node):
        try:
            self.changelog.rev(node)
            return node
        except error.LookupError:
            if not self._dirstatevalidatewarned:
                self._dirstatevalidatewarned = True
                self.ui.warn(
                    _(b"warning: ignoring unknown working parent %s!\n")
                    % short(node)
                )
            return nullid

    @storecache(narrowspec.FILENAME)
    def narrowpats(self):
        """matcher patterns for this repository's narrowspec

        A tuple of (includes, excludes).
        """
        return narrowspec.load(self)

    @storecache(narrowspec.FILENAME)
    def _storenarrowmatch(self):
        if requirementsmod.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always()
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)

    @storecache(narrowspec.FILENAME)
    def _narrowmatch(self):
        if requirementsmod.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always()
        narrowspec.checkworkingcopynarrowspec(self)
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)

    def narrowmatch(self, match=None, includeexact=False):
        """matcher corresponding to the repo's narrowspec

        If `match` is given, then that will be intersected with the narrow
        matcher.

        If `includeexact` is True, then any exact matches from `match` will
        be included even if they're outside the narrowspec.
        """
        if match:
            if includeexact and not self._narrowmatch.always():
                # do not exclude explicitly-specified paths so that they can
                # be warned about later on
                em = matchmod.exact(match.files())
                nm = matchmod.unionmatcher([self._narrowmatch, em])
                return matchmod.intersectmatchers(match, nm)
            return matchmod.intersectmatchers(match, self._narrowmatch)
        return self._narrowmatch
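    # Hedged sketch of the intersection described in narrowmatch() above
    # (``some_matcher`` is a placeholder for any matcher the caller built):
    #
    #     m = repo.narrowmatch(match=some_matcher, includeexact=True)
    #
    # ``m`` matches only paths accepted by both the caller's matcher and the
    # narrowspec, except that exact entries of ``some_matcher`` survive even
    # when they fall outside the narrowspec.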

    def setnarrowpats(self, newincludes, newexcludes):
        narrowspec.save(self, newincludes, newexcludes)
        self.invalidate(clearfilecache=True)

    @unfilteredpropertycache
    def _quick_access_changeid_null(self):
        return {
            b'null': (nullrev, nullid),
            nullrev: (nullrev, nullid),
            nullid: (nullrev, nullid),
        }

    @unfilteredpropertycache
    def _quick_access_changeid_wc(self):
        # also fast path access to the working copy parents
        # however, only do it for filters that ensure wc is visible.
        quick = self._quick_access_changeid_null.copy()
        cl = self.unfiltered().changelog
        for node in self.dirstate.parents():
            if node == nullid:
                continue
            rev = cl.index.get_rev(node)
            if rev is None:
                # unknown working copy parent case:
                #
                #   skip the fast path and let higher code deal with it
                continue
            pair = (rev, node)
            quick[rev] = pair
            quick[node] = pair
            # also add the parents of the parents
            for r in cl.parentrevs(rev):
                if r == nullrev:
                    continue
                n = cl.node(r)
                pair = (r, n)
                quick[r] = pair
                quick[n] = pair
        p1node = self.dirstate.p1()
        if p1node != nullid:
            quick[b'.'] = quick[p1node]
        return quick

    @unfilteredmethod
    def _quick_access_changeid_invalidate(self):
        if '_quick_access_changeid_wc' in vars(self):
            del self.__dict__['_quick_access_changeid_wc']

    @property
    def _quick_access_changeid(self):
        """a helper dictionary for __getitem__ calls

        This contains a list of symbols we can recognise right away without
        further processing.
        """
        if self.filtername in repoview.filter_has_wc:
            return self._quick_access_changeid_wc
        return self._quick_access_changeid_null

    def __getitem__(self, changeid):
        # dealing with special cases
        if changeid is None:
            return context.workingctx(self)
        if isinstance(changeid, context.basectx):
            return changeid

        # dealing with multiple revisions
        if isinstance(changeid, slice):
            # wdirrev isn't contiguous so the slice shouldn't include it
            return [
                self[i]
                for i in pycompat.xrange(*changeid.indices(len(self)))
                if i not in self.changelog.filteredrevs
            ]

        # dealing with some special values
        quick_access = self._quick_access_changeid.get(changeid)
        if quick_access is not None:
            rev, node = quick_access
            return context.changectx(self, rev, node, maybe_filtered=False)
        if changeid == b'tip':
            node = self.changelog.tip()
            rev = self.changelog.rev(node)
            return context.changectx(self, rev, node)

        # dealing with arbitrary values
        try:
            if isinstance(changeid, int):
                node = self.changelog.node(changeid)
                rev = changeid
            elif changeid == b'.':
                # this is a hack to delay/avoid loading obsmarkers
                # when we know that '.' won't be hidden
                node = self.dirstate.p1()
                rev = self.unfiltered().changelog.rev(node)
            elif len(changeid) == 20:
                try:
                    node = changeid
                    rev = self.changelog.rev(changeid)
                except error.FilteredLookupError:
                    changeid = hex(changeid)  # for the error message
                    raise
                except LookupError:
                    # check if it might have come from damaged dirstate
                    #
                    # XXX we could avoid the unfiltered if we had a recognizable
                    # exception for filtered changeset access
                    if (
                        self.local()
                        and changeid in self.unfiltered().dirstate.parents()
                    ):
                        msg = _(b"working directory has unknown parent '%s'!")
                        raise error.Abort(msg % short(changeid))
                    changeid = hex(changeid)  # for the error message
                    raise

            elif len(changeid) == 40:
                node = bin(changeid)
                rev = self.changelog.rev(node)
            else:
                raise error.ProgrammingError(
                    b"unsupported changeid '%s' of type %s"
                    % (changeid, pycompat.bytestr(type(changeid)))
                )

            return context.changectx(self, rev, node)

        except (error.FilteredIndexError, error.FilteredLookupError):
            raise error.FilteredRepoLookupError(
                _(b"filtered revision '%s'") % pycompat.bytestr(changeid)
            )
        except (IndexError, LookupError):
            raise error.RepoLookupError(
                _(b"unknown revision '%s'") % pycompat.bytestr(changeid)
            )
        except error.WdirUnsupported:
            return context.workingctx(self)
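    # Hedged summary of the lookup forms handled by __getitem__ above; every
    # form appears in the method body, the comments are illustrative:
    #
    #     repo[None]      # working directory context
    #     repo[0]         # changectx for revision number 0
    #     repo[b'tip']    # changectx for the tip changeset
    #     repo[b'.']      # changectx for the first working-copy parent
    #     repo[node]      # 20-byte binary node or 40-byte hex node
    #     repo[0:3]       # list of changectx, filtered revs excluded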

    def __contains__(self, changeid):
        """True if the given changeid exists

        error.AmbiguousPrefixLookupError is raised if an ambiguous node
        is specified.
        """
        try:
            self[changeid]
            return True
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def __len__(self):
        # no need to pay the cost of repoview.changelog
        unfi = self.unfiltered()
        return len(unfi.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Find revisions matching a revset.

        The revset is specified as a string ``expr`` that may contain
        %-formatting to escape certain types. See ``revsetlang.formatspec``.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()`` or
        ``repo.anyrevs([expr], user=True)``.

        Returns a smartset.abstractsmartset, which is a list-like interface
        that contains integer revisions.
        '''
        tree = revsetlang.spectree(expr, *args)
        return revset.makematcher(tree)(self)
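    # Hedged sketch of the %-formatting mentioned in revs() above. The revset
    # text is illustrative; ``%d`` (an int) is one of the escapes documented
    # by ``revsetlang.formatspec``:
    #
    #     for rev in repo.revs(b'ancestors(%d) and not public()', 42):
    #         ...  # rev is an integer revision number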

    def set(self, expr, *args):
        '''Find revisions matching a revset and emit changectx instances.

        This is a convenience wrapper around ``revs()`` that iterates the
        result and is a generator of changectx instances.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()``.
        '''
        for r in self.revs(expr, *args):
            yield self[r]

    def anyrevs(self, specs, user=False, localalias=None):
        '''Find revisions matching one of the given revsets.

        Revset aliases from the configuration are not expanded by default. To
        expand user aliases, specify ``user=True``. To provide some local
        definitions overriding user aliases, set ``localalias`` to
        ``{name: definitionstring}``.
        '''
        if specs == [b'null']:
            return revset.baseset([nullrev])
        if specs == [b'.']:
            quick_data = self._quick_access_changeid.get(b'.')
            if quick_data is not None:
                return revset.baseset([quick_data[0]])
        if user:
            m = revset.matchany(
                self.ui,
                specs,
                lookup=revset.lookupfn(self),
                localalias=localalias,
            )
        else:
            m = revset.matchany(None, specs, localalias=localalias)
        return m(self)

    def url(self):
        return b'file:' + self.root

    def hook(self, name, throw=False, **args):
        """Call a hook, passing this repo instance.

        This is a convenience method to aid invoking hooks. Extensions likely
        won't call this unless they have registered a custom hook or are
        replacing code that is expected to call a hook.
        """
        return hook.hook(self.ui, self, name, throw, **args)
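    # Hedged sketch of a hook invocation as described above; the hook name
    # and keyword argument are made up for illustration:
    #
    #     repo.hook(b'myextension-post-op', throw=False, result=b'ok')
    #
    # With throw=True a failing hook raises instead of merely reporting a
    # non-zero status.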

    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tag-related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        rev = self.changelog.rev
        for k, v in pycompat.iteritems(tags):
            try:
                # ignore tags to unknown nodes
                rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        # map tag name to (node, hist)
        alltags = tagsmod.findglobaltags(self.ui, self)
        # map tag name to tag type
        tagtypes = {tag: b'global' for tag in alltags}

        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in pycompat.iteritems(alltags):
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags[b'tip'] = self.changelog.tip()
        tagtypes = {
            encoding.tolocal(name): value
            for (name, value) in pycompat.iteritems(tagtypes)
        }
        return (tags, tagtypes)
1897
1898
1898 def tagtype(self, tagname):
1899 def tagtype(self, tagname):
1899 '''
1900 '''
1900 return the type of the given tag. result can be:
1901 return the type of the given tag. result can be:
1901
1902
1902 'local' : a local tag
1903 'local' : a local tag
1903 'global' : a global tag
1904 'global' : a global tag
1904 None : tag does not exist
1905 None : tag does not exist
1905 '''
1906 '''
1906
1907
1907 return self._tagscache.tagtypes.get(tagname)
1908 return self._tagscache.tagtypes.get(tagname)
1908
1909
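    # A hedged usage sketch of the tag helpers above (the `repo` object,
    # tag names, and nodes are invented for illustration):
    #
    #   repo.tags()             # -> {b'tip': node, b'v1.0': node, ...}
    #   repo.tagtype(b'v1.0')   # -> b'global' (defined in .hgtags)
    #   repo.tagtype(b'wip')    # -> b'local' (defined in .hg/localtags)
    #   repo.tagtype(b'nope')   # -> None (tag does not exist)
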
    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in pycompat.iteritems(self.tags()):
                l.append((self.changelog.rev(n), t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in pycompat.iteritems(self._tagscache.tags):
                nodetagscache.setdefault(n, []).append(t)
            for tags in pycompat.itervalues(nodetagscache):
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        """return the list of bookmarks pointing to the specified node"""
        return self._bookmarks.names(node)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        return self._branchcaches[self]

    @unfilteredmethod
    def revbranchcache(self):
        if not self._revbranchcache:
            self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
        return self._revbranchcache

    def branchtip(self, branch, ignoremissing=False):
        '''return the tip node for a given branch

        If ignoremissing is True, then this method will not raise an error.
        This is helpful for callers that only expect None for a missing branch
        (e.g. namespace).
        '''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            if not ignoremissing:
                raise error.RepoLookupError(_(b"unknown branch '%s'") % branch)
            else:
                pass

    def lookup(self, key):
        node = scmutil.revsymbol(self, key).node()
        if node is None:
            raise error.RepoLookupError(_(b"unknown revision '%s'") % key)
        return node

    def lookupbranch(self, key):
        if self.branchmap().hasbranch(key):
            return key

        return scmutil.revsymbol(self, key).branch()

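    # Hedged sketch of how the branch helpers compose (assumes an existing
    # `repo`; names and hashes are invented):
    #
    #   repo.branchtip(b'default')                   # newest head node
    #   repo.branchtip(b'gone', ignoremissing=True)  # -> None, no raise
    #   repo.lookupbranch(b'1a2b3c')  # not a branch name, so the symbol
    #                                 # is resolved and its branch returned
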
    def known(self, nodes):
        cl = self.changelog
        get_rev = cl.index.get_rev
        filtered = cl.filteredrevs
        result = []
        for n in nodes:
            r = get_rev(n)
            resp = not (r is None or r in filtered)
            result.append(resp)
        return result

    def local(self):
        return self

    def publishing(self):
        # it's safe (and desirable) to trust the publish flag unconditionally
        # so that we don't finalize changes shared between users via ssh or nfs
        return self.ui.configbool(b'phases', b'publish', untrusted=True)

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.publishing():
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered(b'visible').changelog.filteredrevs

    def shared(self):
        '''the type of shared repository (None if not shared)'''
        if self.sharedpath != self.path:
            return b'store'
        return None

    def wjoin(self, f, *insidef):
        return self.vfs.reljoin(self.root, f, *insidef)

    def setparents(self, p1, p2=nullid):
        self[None].setparents(p1, p2)
        self._quick_access_changeid_invalidate()

    def filectx(self, path, changeid=None, fileid=None, changectx=None):
        """changeid must be a changeset revision, if specified.
        fileid can be a file revision or node."""
        return context.filectx(
            self, path, changeid, fileid, changectx=changectx
        )

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def _loadfilter(self, filter):
        if filter not in self._filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == b'!':
                    continue
                mf = matchmod.match(self.root, b'', [pat])
                fn = None
                params = cmd
                for name, filterfn in pycompat.iteritems(self._datafilters):
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name) :].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: procutil.filter(s, c)
                    fn.__name__ = 'commandfilter'
                # Wrap old filters not supporting keyword arguments
                if not pycompat.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, oldfn=oldfn, **kwargs: oldfn(s, c)
                    fn.__name__ = 'compat-' + oldfn.__name__
                l.append((mf, fn, params))
            self._filterpats[filter] = l
        return self._filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug(
                    b"filtering %s through %s\n"
                    % (filename, cmd or pycompat.sysbytes(fn.__name__))
                )
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

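    # For orientation, the patterns fed to _loadfilter() come from hgrc
    # sections named after the filter. An illustrative configuration (the
    # classic gzip example from the hgrc documentation):
    #
    #   [encode]
    #   *.gz = pipe: gunzip
    #   [decode]
    #   *.gz = pipe: gzip
    #
    # A command of `!` disables filtering for that pattern and is skipped
    # during load (see the `continue` above).
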
    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter(b'encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter(b'decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self.wvfs.islink(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wvfs.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
        """write ``data`` into ``filename`` in the working directory

        This returns the length of the written (possibly decoded) data.
        """
        data = self._filter(self._decodefilterpats, filename, data)
        if b'l' in flags:
            self.wvfs.symlink(data, filename)
        else:
            self.wvfs.write(
                filename, data, backgroundclose=backgroundclose, **kwargs
            )
            if b'x' in flags:
                self.wvfs.setflags(filename, False, True)
            else:
                self.wvfs.setflags(filename, False, False)
        return len(data)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def currenttransaction(self):
        """return the current transaction or None if none exists"""
        if self._transref:
            tr = self._transref()
        else:
            tr = None

        if tr and tr.running():
            return tr
        return None

    def transaction(self, desc, report=None):
        if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
            b'devel', b'check-locks'
        ):
            if self._currentlock(self._lockref) is None:
                raise error.ProgrammingError(b'transaction requires locking')
        tr = self.currenttransaction()
        if tr is not None:
            return tr.nest(name=desc)

        # abort here if the journal already exists
        if self.svfs.exists(b"journal"):
            raise error.RepoError(
                _(b"abandoned transaction found"),
                hint=_(b"run 'hg recover' to clean up transaction"),
            )

        idbase = b"%.40f#%f" % (random.random(), time.time())
        ha = hex(hashutil.sha1(idbase).digest())
        txnid = b'TXN:' + ha
        self.hook(b'pretxnopen', throw=True, txnname=desc, txnid=txnid)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        if report:
            rp = report
        else:
            rp = self.ui.warn
        vfsmap = {b'plain': self.vfs, b'store': self.svfs}  # root of .hg/
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        # Code to track tag movement
        #
        # Since tags are all handled as file content, it is actually quite hard
        # to track these movements from a code perspective. So we fall back to
        # tracking at the repository level. One could envision tracking changes
        # to the '.hgtags' file through changegroup apply, but that fails to
        # cope with cases where a transaction exposes new heads without a
        # changegroup being involved (eg: phase movement).
        #
        # For now, we gate the feature behind a flag since it likely comes
        # with performance impacts. The current code runs more often than
        # needed and does not use caches as much as it could. The current
        # focus is on the behavior of the feature so we disable it by default.
        # The flag will be removed when we are happy with the performance
        # impact.
        #
        # Once this feature is no longer experimental move the following
        # documentation to the appropriate help section:
        #
        # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
        # tags (new or changed or deleted tags). In addition the details of
        # these changes are made available in a file at:
        #     ``REPOROOT/.hg/changes/tags.changes``.
        # Make sure you check for HG_TAG_MOVED before reading that file as it
        # might exist from a previous transaction even if no tags were touched
        # in this one. Changes are recorded in a line-based format::
        #
        #   <action> <hex-node> <tag-name>\n
        #
        # Actions are defined as follows:
        #   "-R": tag is removed,
        #   "+A": tag is added,
        #   "-M": tag is moved (old value),
        #   "+M": tag is moved (new value),
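        #
        # For illustration only (the nodes and tag names below are invented
        # and shortened), a tag addition plus a tag move would therefore
        # leave a tags.changes file reading:
        #
        #   +A 611f4b39e313 v1.0
        #   -M 93a17db8ee10 stable
        #   +M 7c50ab62d3b4 stable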
        tracktags = lambda x: None
        # experimental config: experimental.hook-track-tags
        shouldtracktags = self.ui.configbool(
            b'experimental', b'hook-track-tags'
        )
        if desc != b'strip' and shouldtracktags:
            oldheads = self.changelog.headrevs()

            def tracktags(tr2):
                repo = reporef()
                oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
                newheads = repo.changelog.headrevs()
                newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
                # note: we compare lists here. As we do it only once,
                # building sets would not be cheaper.
                changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
                if changes:
                    tr2.hookargs[b'tag_moved'] = b'1'
                    with repo.vfs(
                        b'changes/tags.changes', b'w', atomictemp=True
                    ) as changesfile:
                        # note: we do not register the file to the transaction
                        # because we need it to still exist once the
                        # transaction is closed (for txnclose hooks)
                        tagsmod.writediff(changesfile, changes)

        def validate(tr2):
            """will run pre-closing hooks"""
            # XXX the transaction API is a bit lacking here so we take a hacky
            # path for now
            #
            # We cannot add this as a "pending" hook since the 'tr.hookargs'
            # dict is copied before these run. In addition we need the data
            # available to in-memory hooks too.
            #
            # Moreover, we also need to make sure this runs before txnclose
            # hooks and there is no "pending" mechanism that would execute
            # logic only if hooks are about to run.
            #
            # Fixing this limitation of the transaction is also needed to track
            # other families of changes (bookmarks, phases, obsolescence).
            #
            # This will have to be fixed before we remove the experimental
            # gating.
            tracktags(tr2)
            repo = reporef()

            singleheadopt = (b'experimental', b'single-head-per-branch')
            singlehead = repo.ui.configbool(*singleheadopt)
            if singlehead:
                singleheadsub = repo.ui.configsuboptions(*singleheadopt)[1]
                accountclosed = singleheadsub.get(
                    b"account-closed-heads", False
                )
                scmutil.enforcesinglehead(repo, tr2, desc, accountclosed)
            if hook.hashook(repo.ui, b'pretxnclose-bookmark'):
                for name, (old, new) in sorted(
                    tr.changes[b'bookmarks'].items()
                ):
                    args = tr.hookargs.copy()
                    args.update(bookmarks.preparehookargs(name, old, new))
                    repo.hook(
                        b'pretxnclose-bookmark',
                        throw=True,
                        **pycompat.strkwargs(args)
                    )
            if hook.hashook(repo.ui, b'pretxnclose-phase'):
                cl = repo.unfiltered().changelog
                for revs, (old, new) in tr.changes[b'phases']:
                    for rev in revs:
                        args = tr.hookargs.copy()
                        node = hex(cl.node(rev))
                        args.update(phases.preparehookargs(node, old, new))
                        repo.hook(
                            b'pretxnclose-phase',
                            throw=True,
                            **pycompat.strkwargs(args)
                        )

            repo.hook(
                b'pretxnclose', throw=True, **pycompat.strkwargs(tr.hookargs)
            )

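        # A hedged configuration sketch for the pre-close hooks fired above
        # (the script paths are invented; a non-zero exit aborts the
        # transaction because the hooks run with throw=True):
        #
        #   [hooks]
        #   pretxnclose-bookmark.audit = /usr/local/bin/check-bookmark
        #   pretxnclose-phase.audit = /usr/local/bin/check-phase
        #
        # Each invocation sees the prepared hookargs in its environment,
        # e.g. HG_BOOKMARK/HG_OLDNODE/HG_NODE for the bookmark variant.
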
        def releasefn(tr, success):
            repo = reporef()
            if repo is None:
                # If the repo has been GC'd (and this release function is being
                # called from transaction.__del__), there's not much we can do,
                # so just leave the unfinished transaction there and let the
                # user run `hg recover`.
                return
            if success:
                # this should be explicitly invoked here, because in-memory
                # changes aren't written out when the transaction closes if
                # tr.addfilegenerator (via dirstate.write or so) wasn't
                # invoked while the transaction was running
                repo.dirstate.write(None)
            else:
                # discard all changes (including ones already written
                # out) in this transaction
                narrowspec.restorebackup(self, b'journal.narrowspec')
                narrowspec.restorewcbackup(self, b'journal.narrowspec.dirstate')
                repo.dirstate.restorebackup(None, b'journal.dirstate')

                repo.invalidate(clearfilecache=True)

        tr = transaction.transaction(
            rp,
            self.svfs,
            vfsmap,
            b"journal",
            b"undo",
            aftertrans(renames),
            self.store.createmode,
            validator=validate,
            releasefn=releasefn,
            checkambigfiles=_cachedfiles,
            name=desc,
        )
        tr.changes[b'origrepolen'] = len(self)
        tr.changes[b'obsmarkers'] = set()
        tr.changes[b'phases'] = []
        tr.changes[b'bookmarks'] = {}

        tr.hookargs[b'txnid'] = txnid
        tr.hookargs[b'txnname'] = desc
        tr.hookargs[b'changes'] = tr.changes
        # note: writing the fncache only during finalize means that the file is
        # outdated when running hooks. As fncache is used for streaming clones,
        # this is not expected to break anything that happens during the hooks.
        tr.addfinalize(b'flush-fncache', self.store.write)

        def txnclosehook(tr2):
            """To be run if transaction is successful, will schedule a hook run"""
            # Don't reference tr2 in hook() so we don't hold a reference.
            # This reduces memory consumption when there are multiple
            # transactions per lock. This can likely go away if issue5045
            # fixes the function accumulation.
            hookargs = tr2.hookargs

            def hookfunc(unused_success):
                repo = reporef()
                if hook.hashook(repo.ui, b'txnclose-bookmark'):
                    bmchanges = sorted(tr.changes[b'bookmarks'].items())
                    for name, (old, new) in bmchanges:
                        args = tr.hookargs.copy()
                        args.update(bookmarks.preparehookargs(name, old, new))
                        repo.hook(
                            b'txnclose-bookmark',
                            throw=False,
                            **pycompat.strkwargs(args)
                        )

                if hook.hashook(repo.ui, b'txnclose-phase'):
                    cl = repo.unfiltered().changelog
                    phasemv = sorted(
                        tr.changes[b'phases'], key=lambda r: r[0][0]
                    )
                    for revs, (old, new) in phasemv:
                        for rev in revs:
                            args = tr.hookargs.copy()
                            node = hex(cl.node(rev))
                            args.update(phases.preparehookargs(node, old, new))
                            repo.hook(
                                b'txnclose-phase',
                                throw=False,
                                **pycompat.strkwargs(args)
                            )

                repo.hook(
                    b'txnclose', throw=False, **pycompat.strkwargs(hookargs)
                )

            reporef()._afterlock(hookfunc)

        tr.addfinalize(b'txnclose-hook', txnclosehook)
        # Include a leading "-" to make it happen before the transaction summary
        # reports registered via scmutil.registersummarycallback() whose names
        # are 00-txnreport etc. That way, the caches will be warm when the
        # callbacks run.
        tr.addpostclose(b'-warm-cache', self._buildcacheupdater(tr))

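        # Hedged sketch of the ordering trick above: post-close callbacks
        # run sorted by name, and b"-" sorts before b"0", so
        # (hypothetically)
        #
        #   tr.addpostclose(b'-warm-cache', warmer)    # runs first
        #   tr.addpostclose(b'00-txnreport', report)   # summary runs after
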
        def txnaborthook(tr2):
            """To be run if transaction is aborted"""
            reporef().hook(
                b'txnabort', throw=False, **pycompat.strkwargs(tr2.hookargs)
            )

        tr.addabort(b'txnabort-hook', txnaborthook)
        # avoid eager cache invalidation. in-memory data should be identical
        # to stored data if transaction has no error.
        tr.addpostclose(b'refresh-filecachestats', self._refreshfilecachestats)
        self._transref = weakref.ref(tr)
        scmutil.registersummarycallback(self, tr, desc)
        return tr

    def _journalfiles(self):
        return (
            (self.svfs, b'journal'),
            (self.svfs, b'journal.narrowspec'),
            (self.vfs, b'journal.narrowspec.dirstate'),
            (self.vfs, b'journal.dirstate'),
            (self.vfs, b'journal.branch'),
            (self.vfs, b'journal.desc'),
            (bookmarks.bookmarksvfs(self), b'journal.bookmarks'),
            (self.svfs, b'journal.phaseroots'),
        )

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

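    # For orientation (a hedged sketch, not authoritative): on a successful
    # close, aftertrans(renames) moves each journal file to its undo twin
    # as computed by undoname(), e.g.
    #
    #   .hg/store/journal              -> .hg/store/undo
    #   .hg/journal.dirstate           -> .hg/undo.dirstate
    #   .hg/store/journal.phaseroots   -> .hg/store/undo.phaseroots
    #
    # which is what later makes `hg rollback` possible.
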
    @unfilteredmethod
    def _writejournal(self, desc):
        self.dirstate.savebackup(None, b'journal.dirstate')
        narrowspec.savewcbackup(self, b'journal.narrowspec.dirstate')
        narrowspec.savebackup(self, b'journal.narrowspec')
        self.vfs.write(
            b"journal.branch", encoding.fromlocal(self.dirstate.branch())
        )
        self.vfs.write(b"journal.desc", b"%d\n%s\n" % (len(self), desc))
        bookmarksvfs = bookmarks.bookmarksvfs(self)
        bookmarksvfs.write(
            b"journal.bookmarks", bookmarksvfs.tryread(b"bookmarks")
        )
        self.svfs.write(b"journal.phaseroots", self.svfs.tryread(b"phaseroots"))

    def recover(self):
        with self.lock():
            if self.svfs.exists(b"journal"):
                self.ui.status(_(b"rolling back interrupted transaction\n"))
                vfsmap = {
                    b'': self.svfs,
                    b'plain': self.vfs,
                }
                transaction.rollback(
                    self.svfs,
                    vfsmap,
                    b"journal",
                    self.ui.warn,
                    checkambigfiles=_cachedfiles,
                )
                self.invalidate()
                return True
            else:
                self.ui.warn(_(b"no interrupted transaction available\n"))
                return False

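    # Command-line counterpart (illustrative session; the output mirrors
    # the status message emitted above):
    #
    #   $ hg recover
    #   rolling back interrupted transaction
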
    def rollback(self, dryrun=False, force=False):
        wlock = lock = dsguard = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists(b"undo"):
                dsguard = dirstateguard.dirstateguard(self, b'rollback')

                return self._rollback(dryrun, force, dsguard)
            else:
                self.ui.warn(_(b"no rollback information available\n"))
                return 1
        finally:
            release(dsguard, lock, wlock)

    @unfilteredmethod  # Until we get smarter cache management
    def _rollback(self, dryrun, force, dsguard):
        ui = self.ui
        try:
            args = self.vfs.read(b'undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = _(
                    b'repository tip rolled back to revision %d'
                    b' (undo %s: %s)\n'
                ) % (oldtip, desc, detail)
            else:
                msg = _(
                    b'repository tip rolled back to revision %d (undo %s)\n'
                ) % (oldtip, desc)
        except IOError:
            msg = _(b'rolling back unknown transaction\n')
            desc = None

        if not force and self[b'.'] != self[b'tip'] and desc == b'commit':
            raise error.Abort(
                _(
                    b'rollback of last commit while not checked out '
                    b'may lose data'
                ),
                hint=_(b'use -f to force'),
            )

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        vfsmap = {b'plain': self.vfs, b'': self.svfs}
        transaction.rollback(
            self.svfs, vfsmap, b'undo', ui.warn, checkambigfiles=_cachedfiles
        )
        bookmarksvfs = bookmarks.bookmarksvfs(self)
        if bookmarksvfs.exists(b'undo.bookmarks'):
            bookmarksvfs.rename(
                b'undo.bookmarks', b'bookmarks', checkambig=True
            )
        if self.svfs.exists(b'undo.phaseroots'):
            self.svfs.rename(b'undo.phaseroots', b'phaseroots', checkambig=True)
        self.invalidate()

        has_node = self.changelog.index.has_node
        parentgone = any(not has_node(p) for p in parents)
        if parentgone:
            # prevent dirstateguard from overwriting the already-restored one
            dsguard.close()

            narrowspec.restorebackup(self, b'undo.narrowspec')
            narrowspec.restorewcbackup(self, b'undo.narrowspec.dirstate')
            self.dirstate.restorebackup(None, b'undo.dirstate')
            try:
                branch = self.vfs.read(b'undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(
                    _(
                        b'named branch could not be reset: '
                        b'current branch is still \'%s\'\n'
                    )
                    % self.dirstate.branch()
                )

            parents = tuple([p.rev() for p in self[None].parents()])
            if len(parents) > 1:
                ui.status(
                    _(
                        b'working directory now based on '
                        b'revisions %d and %d\n'
                    )
                    % parents
                )
            else:
                ui.status(
                    _(b'working directory now based on revision %d\n') % parents
                )
            mergestatemod.mergestate.clean(self)

        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

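    # Illustrative session for the path above (revision numbers invented;
    # the messages are the ones constructed in _rollback):
    #
    #   $ hg rollback
    #   repository tip rolled back to revision 42 (undo commit)
    #   working directory now based on revision 42
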
    def _buildcacheupdater(self, newtransaction):
        """called during transaction to build the callback updating cache

        Lives on the repository to help extensions that might want to augment
        this logic. For this purpose, the created transaction is passed to the
        method.
        """
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)

        def updater(tr):
            repo = reporef()
            repo.updatecaches(tr)

        return updater

    @unfilteredmethod
    def updatecaches(self, tr=None, full=False):
        """warm appropriate caches

        If this function is called after a transaction has closed, the
        transaction will be available in the 'tr' argument. This can be used
        to selectively update caches relevant to the changes in that
        transaction.

        If 'full' is set, make sure all caches the function knows about have
        up-to-date data, even the ones usually loaded more lazily.
        """
        if tr is not None and tr.hookargs.get(b'source') == b'strip':
            # During strip, many caches are invalid but
            # later call to `destroyed` will refresh them.
            return

        if tr is None or tr.changes[b'origrepolen'] < len(self):
            # accessing the 'served' branchmap should refresh all the others,
            self.ui.debug(b'updating the branch cache\n')
            self.filtered(b'served').branchmap()
            self.filtered(b'served.hidden').branchmap()

        if full:
            unfi = self.unfiltered()

            self.changelog.update_caches(transaction=tr)
            self.manifestlog.update_caches(transaction=tr)

            rbc = unfi.revbranchcache()
            for r in unfi.changelog:
                rbc.branchinfo(r)
            rbc.write()

            # ensure the working copy parents are in the manifestfulltextcache
            for ctx in self[b'.'].parents():
                ctx.manifest()  # accessing the manifest is enough

            # accessing fnode cache warms the cache
            tagsmod.fnoderevs(self.ui, unfi, unfi.changelog.revs())
            # accessing tags warms the cache
            self.tags()
            self.filtered(b'served').tags()

            # The `full` arg is documented as updating even the lazily-loaded
            # caches immediately, so we're forcing a write to cause these caches
            # to be warmed up even if they haven't explicitly been requested
            # yet (if they've never been used by hg, they won't ever have been
            # written, even if they're a subset of another kind of cache that
            # *has* been used).
            for filt in repoview.filtertable.keys():
                filtered = self.filtered(filt)
                filtered.branchmap().write(filtered)

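    # A hedged note: the full=True path above is, to the best of this
    # reading, what backs the debug command that warms every cache in one
    # go:
    #
    #   $ hg debugupdatecaches
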
    def invalidatecaches(self):

        if '_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__['_tagscache']

        self._branchcaches.clear()
        self.invalidatevolatilesets()
        self._sparsesignaturecache.clear()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)
        self._quick_access_changeid_invalidate()

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has.

        This is different from dirstate.invalidate() in that it doesn't
        always reread the dirstate. Use dirstate.invalidate() if you want to
        explicitly read the dirstate again (i.e. restoring it to a previous
        known good state).'''
        if hasunfilteredcache(self, 'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), 'dirstate')

    def invalidate(self, clearfilecache=False):
        '''Invalidates both store and non-store parts other than dirstate

        If a transaction is running, invalidation of store is omitted,
        because discarding in-memory changes might cause inconsistency
        (e.g. incomplete fncache causes unintentional failure, but
        redundant one doesn't).
        '''
        unfiltered = self.unfiltered()  # all file caches are stored unfiltered
        for k in list(self._filecache.keys()):
            # dirstate is invalidated separately in invalidatedirstate()
            if k == b'dirstate':
                continue
            if (
                k == b'changelog'
                and self.currenttransaction()
                and self.changelog._delayed
            ):
                # The changelog object may store unwritten revisions. We don't
                # want to lose them.
                # TODO: Solve the problem instead of working around it.
                continue

            if clearfilecache:
                del self._filecache[k]
            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        if not self.currenttransaction():
            # TODO: Changing contents of store outside transaction
            # causes inconsistency. We should make in-memory store
            # changes detectable, and abort if changed.
            self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extensions should hook this to invalidate their caches
        self.invalidate()
        self.invalidatedirstate()

    @unfilteredmethod
    def _refreshfilecachestats(self, tr):
        """Reload stats of cached files so that they are flagged as valid"""
        for k, ce in self._filecache.items():
            k = pycompat.sysstr(k)
            if k == 'dirstate' or k not in self.__dict__:
                continue
            ce.refresh()

2690 def _lock(
2691 def _lock(
2691 self, vfs, lockname, wait, releasefn, acquirefn, desc,
2692 self, vfs, lockname, wait, releasefn, acquirefn, desc,
2692 ):
2693 ):
2693 timeout = 0
2694 timeout = 0
2694 warntimeout = 0
2695 warntimeout = 0
2695 if wait:
2696 if wait:
2696 timeout = self.ui.configint(b"ui", b"timeout")
2697 timeout = self.ui.configint(b"ui", b"timeout")
2697 warntimeout = self.ui.configint(b"ui", b"timeout.warn")
2698 warntimeout = self.ui.configint(b"ui", b"timeout.warn")
2698 # internal config: ui.signal-safe-lock
2699 # internal config: ui.signal-safe-lock
2699 signalsafe = self.ui.configbool(b'ui', b'signal-safe-lock')
2700 signalsafe = self.ui.configbool(b'ui', b'signal-safe-lock')
2700
2701
2701 l = lockmod.trylock(
2702 l = lockmod.trylock(
2702 self.ui,
2703 self.ui,
2703 vfs,
2704 vfs,
2704 lockname,
2705 lockname,
2705 timeout,
2706 timeout,
2706 warntimeout,
2707 warntimeout,
2707 releasefn=releasefn,
2708 releasefn=releasefn,
2708 acquirefn=acquirefn,
2709 acquirefn=acquirefn,
2709 desc=desc,
2710 desc=desc,
2710 signalsafe=signalsafe,
2711 signalsafe=signalsafe,
2711 )
2712 )
2712 return l
2713 return l
2713
2714
    def _afterlock(self, callback):
        """add a callback to be run when the repository is fully unlocked

        The callback will be executed when the outermost lock is released
        (with wlock being higher level than 'lock')."""
        for ref in (self._wlockref, self._lockref):
            l = ref and ref()
            if l and l.held:
                l.postrelease.append(callback)
                break
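        # for/else: the else branch runs only when the loop above found no
        # held lock, in which case the callback fires immediately.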
        else:  # no lock has been found.
            callback(True)

    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.

        If both 'lock' and 'wlock' must be acquired, ensure you always acquire
        'wlock' first to avoid a dead-lock hazard.'''
        l = self._currentlock(self._lockref)
        if l is not None:
            l.lock()
            return l

        l = self._lock(
            vfs=self.svfs,
            lockname=b"lock",
            wait=wait,
            releasefn=None,
            acquirefn=self.invalidate,
            desc=_(b'repository %s') % self.origroot,
        )
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.

        Use this before modifying files in .hg.

        If both 'lock' and 'wlock' must be acquired, ensure you always acquire
        'wlock' first to avoid a dead-lock hazard.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        # We do not need to check for non-waiting lock acquisition. Such
        # an acquisition would not cause a dead-lock as it would just fail.
        if wait and (
            self.ui.configbool(b'devel', b'all-warnings')
            or self.ui.configbool(b'devel', b'check-locks')
        ):
            if self._currentlock(self._lockref) is not None:
                self.ui.develwarn(b'"wlock" acquired after "lock"')
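
        # An illustrative sketch of the ordering documented above; callers
        # needing both locks would do:
        #
        #     with repo.wlock():
        #         with repo.lock():
        #             ...
        #
        # Two processes taking the locks in opposite orders could deadlock,
        # which is what the develwarn above is meant to catch.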

        def unlock():
            if self.dirstate.pendingparentchange():
                self.dirstate.invalidate()
            else:
                self.dirstate.write(None)

            self._filecache[b'dirstate'].refresh()

        l = self._lock(
            self.vfs,
            b"wlock",
            wait,
            unlock,
            self.invalidatedirstate,
            _(b'working directory of %s') % self.origroot,
        )
        self._wlockref = weakref.ref(l)
        return l

    def _currentlock(self, lockref):
        """Returns the lock if it's held, or None if it's not."""
        if lockref is None:
            return None
        l = lockref()
        if l is None or not l.held:
            return None
        return l

    def currentwlock(self):
        """Returns the wlock if it's held, or None if it's not."""
        return self._currentlock(self._wlockref)

    def checkcommitpatterns(self, wctx, match, status, fail):
        """check for commit arguments that aren't committable"""
        if match.isexact() or match.prefix():
            matched = set(status.modified + status.added + status.removed)

            for f in match.files():
                f = self.dirstate.normalize(f)
                if f == b'.' or f in matched or f in wctx.substate:
                    continue
                if f in status.deleted:
                    fail(f, _(b'file not found!'))
                # Is it a directory that exists or used to exist?
                if self.wvfs.isdir(f) or wctx.p1().hasdir(f):
                    d = f + b'/'
                    for mf in matched:
                        if mf.startswith(d):
                            break
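                    # for/else: only fail when no matched file lies under
                    # the directory d scanned above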
                    else:
                        fail(f, _(b"no match under directory!"))
                elif f not in self.dirstate:
                    fail(f, _(b"file not tracked!"))

    @unfilteredmethod
    def commit(
        self,
        text=b"",
        user=None,
        date=None,
        match=None,
        force=False,
        editor=None,
        extra=None,
    ):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """
        if extra is None:
            extra = {}

        def fail(f, msg):
            raise error.Abort(b'%s: %s' % (f, msg))

        if not match:
            match = matchmod.always()

        if not force:
            match.bad = fail

        # lock() for recent changelog (see issue4368)
        with self.wlock(), self.lock():
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if not force and merge and not match.always():
                raise error.Abort(
                    _(
                        b'cannot partially commit a merge '
                        b'(do not specify files or patterns)'
                    )
                )

            status = self.status(match=match, clean=force)
            if force:
                status.modified.extend(
                    status.clean
                )  # mq may commit clean files

            # check subrepos
            subs, commitsubs, newstate = subrepoutil.precommit(
                self.ui, wctx, status, match, force=force
            )

            # make sure all explicit patterns are matched
            if not force:
                self.checkcommitpatterns(wctx, match, status, fail)

            cctx = context.workingcommitctx(
                self, status, text, user, date, extra
            )

            ms = mergestatemod.mergestate.read(self)
            mergeutil.checkunresolved(ms)

            # internal config: ui.allowemptycommit
            if cctx.isempty() and not self.ui.configbool(
                b'ui', b'allowemptycommit'
            ):
                self.ui.debug(b'nothing to commit, clearing merge state\n')
                ms.reset()
                return None

            if merge and cctx.deleted():
                raise error.Abort(_(b"cannot commit merge with missing files"))

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = text != cctx._text

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                uipathfn = scmutil.getuipathfn(self)
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(
                        _(b'committing subrepository %s\n')
                        % uipathfn(subrepoutil.subrelpath(sub))
                    )
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepoutil.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
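            # hooks receive hex nodes; the second parent is passed as b''
            # when there is none (p2 == nullid)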
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or b'')
            try:
                self.hook(
                    b"precommit", throw=True, parent1=hookp1, parent2=hookp2
                )
                with self.transaction(b'commit'):
                    ret = self.commitctx(cctx, True)
                    # update bookmarks, dirstate and mergestate
                    bookmarks.update(self, [p1, p2], ret)
                    cctx.markcommitted(ret)
                    ms.reset()
            except:  # re-raises
                if edited:
                    self.ui.write(
                        _(b'note: commit message saved in %s\n') % msgfn
                    )
                    self.ui.write(
                        _(
                            b"note: use 'hg commit --logfile "
                            b".hg/last-message.txt --edit' to reuse it\n"
                        )
                    )
                raise

        def commithook(unused_success):
            # hack for commands that use a temporary commit (eg: histedit):
            # the temporary commit may have been stripped before the hook runs
            if self.changelog.hasnode(ret):
                self.hook(
                    b"commit", node=hex(ret), parent1=hookp1, parent2=hookp2
                )

        self._afterlock(commithook)
        return ret

    @unfilteredmethod
    def commitctx(self, ctx, error=False, origctx=None):
        return commit.commitctx(self, ctx, error=error, origctx=origctx)

    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # refresh all repository caches
        self.updatecaches()

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def status(
        self,
        node1=b'.',
        node2=None,
        match=None,
        ignored=False,
        clean=False,
        unknown=False,
        listsubrepos=False,
    ):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(
            node2, match, ignored, clean, unknown, listsubrepos
        )

    def addpostdsstatus(self, ps):
        """Add a callback to run within the wlock, at the point at which status
        fixups happen.

        On status completion, callback(wctx, status) will be called with the
        wlock held, unless the dirstate has changed from underneath or the wlock
        couldn't be grabbed.

        Callbacks should not capture and use a cached copy of the dirstate --
        it might change in the meanwhile. Instead, they should access the
        dirstate via wctx.repo().dirstate.

        This list is emptied out after each status run -- extensions should
        make sure they add to this list each time dirstate.status is called.
        Extensions should also make sure they don't call this for statuses
        that don't involve the dirstate.
        """

        # The list is located here for uniqueness reasons -- it is actually
        # managed by the workingctx, but that isn't unique per-repo.
        self._postdsstatus.append(ps)

    def postdsstatus(self):
        """Used by workingctx to get the list of post-dirstate-status hooks."""
        return self._postdsstatus

    def clearpostdsstatus(self):
        """Used by workingctx to clear post-dirstate-status hooks."""
        del self._postdsstatus[:]

    def heads(self, start=None):
        if start is None:
            cl = self.changelog
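            # headrevs() yields revs in ascending order; reverse so the
            # result is newest first, matching the sorted() branch below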
            headrevs = reversed(cl.headrevs())
            return [cl.node(rev) for rev in headrevs]

        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if not branches.hasbranch(branch):
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
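            # Walk first parents from n until hitting a merge or a root;
            # record the linear segment as (tipmost, end, p1, p2).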
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1
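
            # Walk first parents from top towards bottom, keeping the nodes
            # seen at exponentially spaced steps (1, 2, 4, 8, ...). This
            # sampling keeps the answer small for long chains and is what
            # the legacy 'between' wire-protocol command expects.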
            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override the push
        command.
        """

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return a util.hooks instance whose callbacks receive a pushop
        (carrying repo, remote, and outgoing) and run before pushing
        changesets.
        """
        return util.hooks()

    def pushkey(self, namespace, key, old, new):
        try:
            tr = self.currenttransaction()
            hookargs = {}
            if tr is not None:
                hookargs.update(tr.hookargs)
            hookargs = pycompat.strkwargs(hookargs)
            hookargs['namespace'] = namespace
            hookargs['key'] = key
            hookargs['old'] = old
            hookargs['new'] = new
            self.hook(b'prepushkey', throw=True, **hookargs)
        except error.HookAbort as exc:
            self.ui.write_err(_(b"pushkey-abort: %s\n") % exc)
            if exc.hint:
                self.ui.write_err(_(b"(%s)\n") % exc.hint)
            return False
        self.ui.debug(b'pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
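        # The 'pushkey' hook is deferred via _afterlock() below so that it
        # only runs once every lock protecting this change is released.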

        def runhook(unused_success):
            self.hook(
                b'pushkey',
                namespace=namespace,
                key=key,
                old=old,
                new=new,
                ret=ret,
            )

        self._afterlock(runhook)
        return ret

    def listkeys(self, namespace):
        self.hook(b'prelistkeys', throw=True, namespace=namespace)
        self.ui.debug(b'listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook(b'listkeys', namespace=namespace, values=values)
        return values

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return b"%s %s %s %s %s" % (
            one,
            two,
            pycompat.bytestr(three),
            pycompat.bytestr(four),
            pycompat.bytestr(five),
        )

    def savecommitmessage(self, text):
        fp = self.vfs(b'last-message.txt', b'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1 :])


# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]

    def a():
        for vfs, src, dest in renamefiles:
            # if src and dest refer to the same file, vfs.rename is a no-op,
            # leaving both src and dest on disk. delete dest to make sure
            # the rename couldn't be such a no-op.
            vfs.tryunlink(dest)
            try:
                vfs.rename(src, dest)
            except OSError:  # journal file does not yet exist
                pass

    return a


def undoname(fn):
    base, name = os.path.split(fn)
    assert name.startswith(b'journal')
    return os.path.join(base, name.replace(b'journal', b'undo', 1))


def instance(ui, path, create, intents=None, createopts=None):
    localpath = util.urllocalpath(path)
    if create:
        createrepository(ui, localpath, createopts=createopts)

    return makelocalrepository(ui, localpath, intents=intents)


def islocal(path):
    return True


def defaultcreateopts(ui, createopts=None):
    """Populate the default creation options for a repository.

    A dictionary of explicitly requested creation options can be passed
    in. Missing keys will be populated.
    """
    createopts = dict(createopts or {})

    if b'backend' not in createopts:
        # experimental config: storage.new-repo-backend
        createopts[b'backend'] = ui.config(b'storage', b'new-repo-backend')

    return createopts


def newreporequirements(ui, createopts):
    """Determine the set of requirements for a new local repository.

    Extensions can wrap this function to specify custom requirements for
    new repositories.
    """
    # If the repo is being created from a shared repository, we copy
    # its requirements.
    if b'sharedrepo' in createopts:
        requirements = set(createopts[b'sharedrepo'].requirements)
        if createopts.get(b'sharedrelative'):
            requirements.add(requirementsmod.RELATIVE_SHARED_REQUIREMENT)
        else:
            requirements.add(requirementsmod.SHARED_REQUIREMENT)

        return requirements

    if b'backend' not in createopts:
        raise error.ProgrammingError(
            b'backend key not present in createopts; '
            b'was defaultcreateopts() called?'
        )

    if createopts[b'backend'] != b'revlogv1':
        raise error.Abort(
            _(
                b'unable to determine repository requirements for '
                b'storage backend: %s'
            )
            % createopts[b'backend']
        )

    requirements = {b'revlogv1'}
    if ui.configbool(b'format', b'usestore'):
        requirements.add(b'store')
    if ui.configbool(b'format', b'usefncache'):
        requirements.add(b'fncache')
    if ui.configbool(b'format', b'dotencode'):
        requirements.add(b'dotencode')

    compengines = ui.configlist(b'format', b'revlog-compression')
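    # Pick the first configured engine that is actually available; the
    # for/else below aborts only when none of the listed engines exists.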
    for compengine in compengines:
        if compengine in util.compengines:
            break
    else:
        raise error.Abort(
            _(
                b'compression engines %s defined by '
                b'format.revlog-compression not available'
            )
            % b', '.join(b'"%s"' % e for e in compengines),
            hint=_(
                b'run "hg debuginstall" to list available '
                b'compression engines'
            ),
        )

    # zlib is the historical default and doesn't need an explicit requirement.
    if compengine == b'zstd':
        requirements.add(b'revlog-compression-zstd')
    elif compengine != b'zlib':
        requirements.add(b'exp-compression-%s' % compengine)

    if scmutil.gdinitconfig(ui):
        requirements.add(b'generaldelta')
    if ui.configbool(b'format', b'sparse-revlog'):
        requirements.add(requirementsmod.SPARSEREVLOG_REQUIREMENT)

    # experimental config: format.exp-use-side-data
    if ui.configbool(b'format', b'exp-use-side-data'):
        requirements.add(requirementsmod.SIDEDATA_REQUIREMENT)
    # experimental config: format.exp-use-copies-side-data-changeset
    if ui.configbool(b'format', b'exp-use-copies-side-data-changeset'):
        requirements.add(requirementsmod.SIDEDATA_REQUIREMENT)
        requirements.add(requirementsmod.COPIESSDC_REQUIREMENT)
    if ui.configbool(b'experimental', b'treemanifest'):
        requirements.add(requirementsmod.TREEMANIFEST_REQUIREMENT)

    revlogv2 = ui.config(b'experimental', b'revlogv2')
    if revlogv2 == b'enable-unstable-format-and-corrupt-my-data':
        requirements.remove(b'revlogv1')
        # generaldelta is implied by revlogv2.
        requirements.discard(b'generaldelta')
        requirements.add(requirementsmod.REVLOGV2_REQUIREMENT)
    # experimental config: format.internal-phase
    if ui.configbool(b'format', b'internal-phase'):
        requirements.add(requirementsmod.INTERNAL_PHASE_REQUIREMENT)

    if createopts.get(b'narrowfiles'):
        requirements.add(requirementsmod.NARROW_REQUIREMENT)

    if createopts.get(b'lfs'):
        requirements.add(b'lfs')

    if ui.configbool(b'format', b'bookmarks-in-store'):
        requirements.add(bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT)

    if ui.configbool(b'format', b'use-persistent-nodemap'):
        requirements.add(requirementsmod.NODEMAP_REQUIREMENT)

    # if share-safe is enabled, let's create the new repository with the new
    # requirement
    if ui.configbool(b'format', b'exp-share-safe'):
        requirements.add(requirementsmod.SHARESAFE_REQUIREMENT)

    return requirements


def checkrequirementscompat(ui, requirements):
    """Checks compatibility of repository requirements enabled and disabled.

    Returns a set of requirements which need to be dropped because dependent
    requirements are not enabled. Also warns users about it."""

    dropped = set()

    if b'store' not in requirements:
        if bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT in requirements:
            ui.warn(
                _(
                    b'ignoring enabled \'format.bookmarks-in-store\' config '
                    b'because it is incompatible with disabled '
                    b'\'format.usestore\' config\n'
                )
            )
            dropped.add(bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT)

        if (
            requirementsmod.SHARED_REQUIREMENT in requirements
            or requirementsmod.RELATIVE_SHARED_REQUIREMENT in requirements
        ):
            raise error.Abort(
                _(
                    b"cannot create shared repository as source was created"
                    b" with 'format.usestore' config disabled"
                )
            )

        if requirementsmod.SHARESAFE_REQUIREMENT in requirements:
            ui.warn(
                _(
                    b"ignoring enabled 'format.exp-share-safe' config because "
                    b"it is incompatible with disabled 'format.usestore'"
                    b" config\n"
                )
            )
            dropped.add(requirementsmod.SHARESAFE_REQUIREMENT)

    return dropped


def filterknowncreateopts(ui, createopts):
    """Filters a dict of repo creation options against options that are known.

    Receives a dict of repo creation options and returns a dict of those
    options that we don't know how to handle.

    This function is called as part of repository creation. If the
    returned dict contains any items, repository creation will not
    be allowed, as it means there was a request to create a repository
    with options not recognized by loaded code.

    Extensions can wrap this function to filter out creation options
    they know how to handle.
    """
    known = {
        b'backend',
        b'lfs',
        b'narrowfiles',
        b'sharedrepo',
        b'sharedrelative',
        b'shareditems',
        b'shallowfilestore',
    }

    return {k: v for k, v in createopts.items() if k not in known}


def createrepository(ui, path, createopts=None):
    """Create a new repository in a vfs.

    ``path`` path to the new repo's working directory.
    ``createopts`` options for the new repository.

    The following keys for ``createopts`` are recognized:

    backend
        The storage backend to use.
    lfs
        Repository will be created with ``lfs`` requirement. The lfs extension
        will automatically be loaded when the repository is accessed.
    narrowfiles
        Set up repository to support narrow file storage.
    sharedrepo
        Repository object from which storage should be shared.
    sharedrelative
        Boolean indicating if the path to the shared repo should be
        stored as relative. By default, the pointer to the "parent" repo
        is stored as an absolute path.
    shareditems
        Set of items to share to the new repository (in addition to storage).
    shallowfilestore
        Indicates that storage for files should be shallow (not all ancestor
        revisions are known).
    """
    createopts = defaultcreateopts(ui, createopts=createopts)

    unknownopts = filterknowncreateopts(ui, createopts)

    if not isinstance(unknownopts, dict):
        raise error.ProgrammingError(
            b'filterknowncreateopts() did not return a dict'
        )

    if unknownopts:
        raise error.Abort(
            _(
                b'unable to create repository because of unknown '
                b'creation option: %s'
            )
            % b', '.join(sorted(unknownopts)),
            hint=_(b'is a required extension not loaded?'),
        )

    requirements = newreporequirements(ui, createopts=createopts)
    requirements -= checkrequirementscompat(ui, requirements)

    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
    if hgvfs.exists():
        raise error.RepoError(_(b'repository %s already exists') % path)

    if b'sharedrepo' in createopts:
        sharedpath = createopts[b'sharedrepo'].sharedpath

        if createopts.get(b'sharedrelative'):
            try:
                sharedpath = os.path.relpath(sharedpath, hgvfs.base)
            except (IOError, ValueError) as e:
                # ValueError is raised on Windows if the drive letters differ
                # on each path.
                raise error.Abort(
                    _(b'cannot calculate relative path'),
                    hint=stringutil.forcebytestr(e),
                )

    if not wdirvfs.exists():
        wdirvfs.makedirs()

    hgvfs.makedir(notindexed=True)
    if b'sharedrepo' not in createopts:
        hgvfs.mkdir(b'cache')
        hgvfs.mkdir(b'wcache')

    if b'store' in requirements and b'sharedrepo' not in createopts:
        hgvfs.mkdir(b'store')

        # We create an invalid changelog outside the store so very old
        # Mercurial versions (which didn't know about the requirements
        # file) encounter an error on reading the changelog. This
        # effectively locks out old clients and prevents them from
        # mucking with a repo in an unknown format.
        #
        # The revlog header has version 2, which won't be recognized by
        # such old clients.
        hgvfs.append(
            b'00changelog.i',
            b'\0\0\0\2 dummy changelog to prevent using the old repo '
            b'layout',
        )

    # Filter the requirements into working copy and store ones
    wcreq, storereq = scmutil.filterrequirements(requirements)
    # write working copy ones
    scmutil.writerequires(hgvfs, wcreq)
    # If there are store requirements and the current repository
    # is not a shared one, write the store requirements.
    # For a new shared repository, we don't need to write them,
    # as they are already present in the source's store.
    if storereq and b'sharedrepo' not in createopts:
        storevfs = vfsmod.vfs(hgvfs.join(b'store'), cacheaudited=True)
        scmutil.writerequires(storevfs, storereq)

    # Write out file telling readers where to find the shared store.
    if b'sharedrepo' in createopts:
        hgvfs.write(b'sharedpath', sharedpath)

    if createopts.get(b'shareditems'):
        shared = b'\n'.join(sorted(createopts[b'shareditems'])) + b'\n'
        hgvfs.write(b'shared', shared)


def poisonrepository(repo):
    """Poison a repository instance so it can no longer be used."""
    # Perform any cleanup on the instance.
    repo.close()

    # Our strategy is to replace the type of the object with one that
    # has all attribute lookups result in error.
    #
    # But we have to allow the close() method because some constructors
    # of repos call close() on repo references.
    class poisonedrepository(object):
        def __getattribute__(self, item):
            if item == 'close':
                return object.__getattribute__(self, item)

            raise error.ProgrammingError(
                b'repo instances should not be used after unshare'
            )

        def close(self):
            pass

    # We may have a repoview, which intercepts __setattr__. So be sure
    # we operate at the lowest level possible.
    object.__setattr__(repo, '__class__', poisonedrepository)
@@ -1,749 +1,749 b''
# wireprotov1server.py - Wire protocol version 1 server functionality
#
# Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import binascii
import os

from .i18n import _
from .node import (
    hex,
    nullid,
)
from .pycompat import getattr

from . import (
    bundle2,
    bundlecaches,
    changegroup as changegroupmod,
    discovery,
    encoding,
    error,
    exchange,
    pushkey as pushkeymod,
    pycompat,
    streamclone,
    util,
    wireprototypes,
)

from .utils import (
    procutil,
    stringutil,
)

urlerr = util.urlerr
urlreq = util.urlreq

bundle2requiredmain = _(b'incompatible Mercurial client; bundle2 required')
bundle2requiredhint = _(
    b'see https://www.mercurial-scm.org/wiki/IncompatibleClient'
)
bundle2required = b'%s\n(%s)\n' % (bundle2requiredmain, bundle2requiredhint)


def clientcompressionsupport(proto):
    """Returns a list of compression methods supported by the client.

    Returns a list of the compression methods supported by the client
    according to the protocol capabilities. If no such capability has
    been announced, fall back to the default of zlib and uncompressed.
    """
    for cap in proto.getprotocaps():
        if cap.startswith(b'comp='):
            return cap[5:].split(b',')
    return [b'zlib', b'none']


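# Illustrative sketch (hypothetical stub, not part of the module): how the
# 'comp=' protocol capability, set via the 'protocaps' command, is parsed.
#
#     class fakeproto(object):
#         def getprotocaps(self):
#             return [b'partial-pull', b'comp=zstd,zlib,none']
#
#     clientcompressionsupport(fakeproto())  # -> [b'zstd', b'zlib', b'none']
#
# A client that never announced a 'comp=' capability gets the legacy
# default of [b'zlib', b'none'].
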
# A wire protocol command can either return a string or one of the response
# classes defined in wireprototypes.


def getdispatchrepo(repo, proto, command):
    """Obtain the repo used for processing wire protocol commands.

    The intent of this function is to serve as a monkeypatch point for
    extensions that need commands to operate on different repo views under
    specialized circumstances.
    """
    viewconfig = repo.ui.config(b'server', b'view')
    return repo.filtered(viewconfig)


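# Illustrative sketch (assumed extension code, not part of this module):
# because dispatch() below resolves getdispatchrepo at call time, an
# extension can monkeypatch it to serve a different repo view.
#
#     from mercurial import extensions, wireprotov1server
#
#     def _immutableview(orig, repo, proto, command):
#         # e.g. only serve changesets that can no longer change
#         return repo.filtered(b'immutable')
#
#     def extsetup(ui):
#         extensions.wrapfunction(
#             wireprotov1server, 'getdispatchrepo', _immutableview
#         )
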
def dispatch(repo, proto, command):
    repo = getdispatchrepo(repo, proto, command)

    func, spec = commands[command]
    args = proto.getargs(spec)

    return func(repo, proto, *args)


def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        procutil.stderr.write(
            b"warning: %s ignored unexpected arguments %s\n"
            % (cmd, b",".join(others))
        )
    return opts


def bundle1allowed(repo, action):
    """Whether a bundle1 operation is allowed from the server.

    Priority is:

    1. server.bundle1gd.<action> (if generaldelta active)
    2. server.bundle1.<action>
    3. server.bundle1gd (if generaldelta active)
    4. server.bundle1
    """
    ui = repo.ui
    gd = b'generaldelta' in repo.requirements

    if gd:
        v = ui.configbool(b'server', b'bundle1gd.%s' % action)
        if v is not None:
            return v

    v = ui.configbool(b'server', b'bundle1.%s' % action)
    if v is not None:
        return v

    if gd:
        v = ui.configbool(b'server', b'bundle1gd')
        if v is not None:
            return v

    return ui.configbool(b'server', b'bundle1')


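# Illustrative sketch (hypothetical configuration): given the lookup order
# documented above, this hgrc disables bundle1 across the board but still
# allows bundle1 pulls from general-delta repositories:
#
#     [server]
#     bundle1 = false
#     bundle1gd.pull = true
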
commands = wireprototypes.commanddict()


def wireprotocommand(name, args=None, permission=b'push'):
    """Decorator to declare a wire protocol command.

    ``name`` is the name of the wire protocol command being provided.

    ``args`` defines the named arguments accepted by the command. It is
    a space-delimited list of argument names. ``*`` denotes a special value
    that says to accept all named arguments.

    ``permission`` defines the permission type needed to run this command.
    Can be ``push`` or ``pull``. These roughly map to read-write and read-only,
    respectively. The default is to assume the command requires ``push``
    permission, because otherwise commands not declaring their permissions
    could modify a repository that is supposed to be read-only.
    """
    transports = {
        k for k, v in wireprototypes.TRANSPORTS.items() if v[b'version'] == 1
    }

    # Because SSHv2 is a mirror of SSHv1, we allow "batch" commands through to
    # SSHv2.
    # TODO undo this hack when SSH is using the unified frame protocol.
    if name == b'batch':
        transports.add(wireprototypes.SSHV2)

    if permission not in (b'push', b'pull'):
        raise error.ProgrammingError(
            b'invalid wire protocol permission; '
            b'got %s; expected "push" or "pull"' % permission
        )

    if args is None:
        args = b''

    if not isinstance(args, bytes):
        raise error.ProgrammingError(
            b'arguments for version 1 commands must be declared as bytes'
        )

    def register(func):
        if name in commands:
            raise error.ProgrammingError(
                b'%s command already registered for version 1' % name
            )
        commands[name] = wireprototypes.commandentry(
            func, args=args, transports=transports, permission=permission
        )

        return func

    return register


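# Illustrative sketch (hypothetical command name): registering a read-only
# command with the decorator above. The function receives the declared
# arguments, already parsed from the wire.
#
#     @wireprotocommand(b'echokey', b'key', permission=b'pull')
#     def echokey(repo, proto, key):
#         return wireprototypes.bytesresponse(b'you sent: %s' % key)
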
# TODO define a more appropriate permissions type to use for this.
@wireprotocommand(b'batch', b'cmds *', permission=b'pull')
def batch(repo, proto, cmds, others):
    unescapearg = wireprototypes.unescapebatcharg
    res = []
    for pair in cmds.split(b';'):
        op, args = pair.split(b' ', 1)
        vals = {}
        for a in args.split(b','):
            if a:
                n, v = a.split(b'=')
                vals[unescapearg(n)] = unescapearg(v)
        func, spec = commands[op]

        # Validate that the client has permission to perform this command.
        perm = commands[op].permission
        assert perm in (b'push', b'pull')
        proto.checkperm(perm)

        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == b'*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data[b'*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, wireprototypes.ooberror):
            return result

        # For now, all batchable commands must return bytesresponse or
        # raw bytes (for backwards compatibility).
        assert isinstance(result, (wireprototypes.bytesresponse, bytes))
        if isinstance(result, wireprototypes.bytesresponse):
            result = result.data
        res.append(wireprototypes.escapebatcharg(result))

    return wireprototypes.bytesresponse(b';'.join(res))


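# Illustrative sketch (hypothetical payload): the 'cmds' argument parsed
# above is a ';'-separated list of '<op> <name>=<value>,...' entries, with
# names and values escaped via wireprototypes.escapebatcharg. For example,
#
#     cmds = b'heads ;listkeys namespace=bookmarks'
#
# runs 'heads' (no arguments) and 'listkeys' in a single round trip, and
# the two escaped results come back joined by ';'.
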
@wireprotocommand(b'between', b'pairs', permission=b'pull')
def between(repo, proto, pairs):
    pairs = [wireprototypes.decodelist(p, b'-') for p in pairs.split(b" ")]
    r = []
    for b in repo.between(pairs):
        r.append(wireprototypes.encodelist(b) + b"\n")

    return wireprototypes.bytesresponse(b''.join(r))


@wireprotocommand(b'branchmap', permission=b'pull')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in pycompat.iteritems(branchmap):
        branchname = urlreq.quote(encoding.fromlocal(branch))
        branchnodes = wireprototypes.encodelist(nodes)
        heads.append(b'%s %s' % (branchname, branchnodes))

    return wireprototypes.bytesresponse(b'\n'.join(heads))


@wireprotocommand(b'branches', b'nodes', permission=b'pull')
def branches(repo, proto, nodes):
    nodes = wireprototypes.decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(wireprototypes.encodelist(b) + b"\n")

    return wireprototypes.bytesresponse(b''.join(r))


@wireprotocommand(b'clonebundles', b'', permission=b'pull')
def clonebundles(repo, proto):
    """Server command for returning info for available bundles to seed clones.

    Clients will parse this response and determine what bundle to fetch.

    Extensions may wrap this command to filter or dynamically emit data
    depending on the request. e.g. you could advertise URLs for the closest
    data center given the client's IP address.
    """
    return wireprototypes.bytesresponse(
-        repo.vfs.tryread(b'clonebundles.manifest')
+        repo.vfs.tryread(bundlecaches.CB_MANIFEST_FILE)
    )


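# Illustrative sketch (hypothetical URLs): the manifest returned above is a
# plain-text file of '<URL> <KEY=VALUE>...' lines, one advertised bundle
# per line, e.g.:
#
#     https://cdn.example.com/full.hg BUNDLESPEC=zstd-v2
#     https://cdn.example.com/full-gzip.hg BUNDLESPEC=gzip-v2
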
wireprotocaps = [
    b'lookup',
    b'branchmap',
    b'pushkey',
    b'known',
    b'getbundle',
    b'unbundlehash',
]


def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes made here propagate to both the `capabilities` and `hello`
      commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)

    # A command with the same name as a capability isn't exposed to version 1
    # transports, so add it conditionally.
    if commands.commandavailable(b'changegroupsubset', proto):
        caps.append(b'changegroupsubset')

    if streamclone.allowservergeneration(repo):
        if repo.ui.configbool(b'server', b'preferuncompressed'):
            caps.append(b'stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - {b'revlogv1'}:
            caps.append(b'stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append(b'streamreqs=%s' % b','.join(sorted(requiredformats)))
    if repo.ui.configbool(b'experimental', b'bundle2-advertise'):
        capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=b'server'))
        caps.append(b'bundle2=' + urlreq.quote(capsblob))
    caps.append(b'unbundle=%s' % b','.join(bundle2.bundlepriority))

    if repo.ui.configbool(b'experimental', b'narrow'):
        caps.append(wireprototypes.NARROWCAP)
        if repo.ui.configbool(b'experimental', b'narrowservebrokenellipses'):
            caps.append(wireprototypes.ELLIPSESCAP)

    return proto.addcapabilities(repo, caps)


# If you are writing an extension and considering wrapping this function,
# wrap `_capabilities` instead.
@wireprotocommand(b'capabilities', permission=b'pull')
def capabilities(repo, proto):
    caps = _capabilities(repo, proto)
    return wireprototypes.bytesresponse(b' '.join(sorted(caps)))


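# Illustrative sketch (assumed extension code): advertising an extra
# capability by wrapping _capabilities, as the comment above recommends.
#
#     from mercurial import extensions, wireprotov1server
#
#     def _mycaps(orig, repo, proto):
#         caps = orig(repo, proto)
#         caps.append(b'mycapability')
#         return caps
#
#     def extsetup(ui):
#         extensions.wrapfunction(wireprotov1server, '_capabilities', _mycaps)
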
@wireprotocommand(b'changegroup', b'roots', permission=b'pull')
def changegroup(repo, proto, roots):
    nodes = wireprototypes.decodelist(roots)
    outgoing = discovery.outgoing(
        repo, missingroots=nodes, ancestorsof=repo.heads()
    )
    cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
    gen = iter(lambda: cg.read(32768), b'')
    return wireprototypes.streamres(gen=gen)


@wireprotocommand(b'changegroupsubset', b'bases heads', permission=b'pull')
def changegroupsubset(repo, proto, bases, heads):
    bases = wireprototypes.decodelist(bases)
    heads = wireprototypes.decodelist(heads)
    outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads)
    cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
    gen = iter(lambda: cg.read(32768), b'')
    return wireprototypes.streamres(gen=gen)


@wireprotocommand(b'debugwireargs', b'one two *', permission=b'pull')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options(b'debugwireargs', [b'three', b'four'], others)
    return wireprototypes.bytesresponse(
        repo.debugwireargs(one, two, **pycompat.strkwargs(opts))
    )


def find_pullbundle(repo, proto, opts, clheads, heads, common):
    """Return a file object for the first matching pullbundle.

    Pullbundles are specified in .hg/pullbundles.manifest similar to
    clonebundles.
    For each entry, the bundle specification is checked for compatibility:
    - Client features vs the BUNDLESPEC.
    - Revisions shared with the client vs base revisions of the bundle.
      A bundle can be applied only if all its base revisions are known by
      the client.
    - At least one leaf of the bundle's DAG is missing on the client.
    - Every leaf of the bundle's DAG is part of the node set the client wants.
      E.g. do not send a bundle of all changes if the client wants only
      one specific branch of many.
    """

    def decodehexstring(s):
        return {binascii.unhexlify(h) for h in s.split(b';')}

    manifest = repo.vfs.tryread(b'pullbundles.manifest')
    if not manifest:
        return None
    res = bundlecaches.parseclonebundlesmanifest(repo, manifest)
    res = bundlecaches.filterclonebundleentries(repo, res)
    if not res:
        return None
    cl = repo.unfiltered().changelog
    heads_anc = cl.ancestors([cl.rev(rev) for rev in heads], inclusive=True)
    common_anc = cl.ancestors([cl.rev(rev) for rev in common], inclusive=True)
    compformats = clientcompressionsupport(proto)
    for entry in res:
        comp = entry.get(b'COMPRESSION')
        altcomp = util.compengines._bundlenames.get(comp)
        if comp and comp not in compformats and altcomp not in compformats:
            continue
        # No test yet for VERSION, since V2 is supported by any client
        # that advertises partial pulls
        if b'heads' in entry:
            try:
                bundle_heads = decodehexstring(entry[b'heads'])
            except TypeError:
                # Bad heads entry
                continue
            if bundle_heads.issubset(common):
                continue  # Nothing new
            if all(cl.rev(rev) in common_anc for rev in bundle_heads):
                continue  # Still nothing new
            if any(
                cl.rev(rev) not in heads_anc and cl.rev(rev) not in common_anc
                for rev in bundle_heads
            ):
                continue
        if b'bases' in entry:
            try:
                bundle_bases = decodehexstring(entry[b'bases'])
            except TypeError:
                # Bad bases entry
                continue
            if not all(cl.rev(rev) in common_anc for rev in bundle_bases):
                continue
        path = entry[b'URL']
        repo.ui.debug(b'sending pullbundle "%s"\n' % path)
        try:
            return repo.vfs.open(path)
        except IOError:
            repo.ui.debug(b'pullbundle "%s" not accessible\n' % path)
            continue
    return None


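# Illustrative sketch (hypothetical paths, truncated hashes): a
# .hg/pullbundles.manifest matching the checks above could look like:
#
#     partial-default.hg BUNDLESPEC=gzip-v2 heads=a1b2...;c3d4... bases=e5f6...
#     full.hg BUNDLESPEC=zstd-v2
#
# Entries are tried in order; 'heads' and 'bases' are ';'-separated hex
# node ids, decoded by decodehexstring() above.
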
@wireprotocommand(b'getbundle', b'*', permission=b'pull')
def getbundle(repo, proto, others):
    opts = options(
        b'getbundle', wireprototypes.GETBUNDLE_ARGUMENTS.keys(), others
    )
    for k, v in pycompat.iteritems(opts):
        keytype = wireprototypes.GETBUNDLE_ARGUMENTS[k]
        if keytype == b'nodes':
            opts[k] = wireprototypes.decodelist(v)
        elif keytype == b'csv':
            opts[k] = list(v.split(b','))
        elif keytype == b'scsv':
            opts[k] = set(v.split(b','))
        elif keytype == b'boolean':
            # Client should serialize False as '0', which is a non-empty string
            # so it evaluates as a True bool.
            if v == b'0':
                opts[k] = False
            else:
                opts[k] = bool(v)
        elif keytype != b'plain':
            raise KeyError(b'unknown getbundle option type %s' % keytype)

    if not bundle1allowed(repo, b'pull'):
        if not exchange.bundle2requested(opts.get(b'bundlecaps')):
            if proto.name == b'http-v1':
                return wireprototypes.ooberror(bundle2required)
            raise error.Abort(bundle2requiredmain, hint=bundle2requiredhint)

    try:
        clheads = set(repo.changelog.heads())
        heads = set(opts.get(b'heads', set()))
        common = set(opts.get(b'common', set()))
        common.discard(nullid)
        if (
            repo.ui.configbool(b'server', b'pullbundle')
            and b'partial-pull' in proto.getprotocaps()
        ):
            # Check if a pre-built bundle covers this request.
            bundle = find_pullbundle(repo, proto, opts, clheads, heads, common)
            if bundle:
                return wireprototypes.streamres(
                    gen=util.filechunkiter(bundle), prefer_uncompressed=True
                )

        if repo.ui.configbool(b'server', b'disablefullbundle'):
            # Check to see if this is a full clone.
            changegroup = opts.get(b'cg', True)
            if changegroup and not common and clheads == heads:
                raise error.Abort(
                    _(b'server has pull-based clones disabled'),
                    hint=_(b'remove --pull if specified or upgrade Mercurial'),
                )

        info, chunks = exchange.getbundlechunks(
            repo, b'serve', **pycompat.strkwargs(opts)
        )
        prefercompressed = info.get(b'prefercompressed', True)
    except error.Abort as exc:
        # cleanly forward Abort error to the client
        if not exchange.bundle2requested(opts.get(b'bundlecaps')):
            if proto.name == b'http-v1':
                return wireprototypes.ooberror(exc.message + b'\n')
            raise  # cannot do better for bundle1 + ssh
        # a bundle2 request expects a bundle2 reply
        bundler = bundle2.bundle20(repo.ui)
        manargs = [(b'message', exc.message)]
        advargs = []
        if exc.hint is not None:
            advargs.append((b'hint', exc.hint))
        bundler.addpart(bundle2.bundlepart(b'error:abort', manargs, advargs))
        chunks = bundler.getchunks()
        prefercompressed = False

    return wireprototypes.streamres(
        gen=chunks, prefer_uncompressed=not prefercompressed
    )


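# Illustrative sketch (hypothetical configuration): the pre-built bundle
# path above is only taken when the server opts in and the client
# advertises the 'partial-pull' protocol capability:
#
#     [server]
#     pullbundle = true
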
@wireprotocommand(b'heads', permission=b'pull')
def heads(repo, proto):
    h = repo.heads()
    return wireprototypes.bytesresponse(wireprototypes.encodelist(h) + b'\n')


@wireprotocommand(b'hello', permission=b'pull')
def hello(repo, proto):
    """Called as part of SSH handshake to obtain server info.

    Returns a list of lines describing interesting things about the
    server, in an RFC822-like format.

    Currently, the only one defined is ``capabilities``, which consists of a
    line of space-separated tokens describing server abilities:

        capabilities: <token0> <token1> <token2>
    """
    caps = capabilities(repo, proto).data
    return wireprototypes.bytesresponse(b'capabilities: %s\n' % caps)


@wireprotocommand(b'listkeys', b'namespace', permission=b'pull')
def listkeys(repo, proto, namespace):
    d = sorted(repo.listkeys(encoding.tolocal(namespace)).items())
    return wireprototypes.bytesresponse(pushkeymod.encodekeys(d))


@wireprotocommand(b'lookup', b'key', permission=b'pull')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        n = repo.lookup(k)
        r = hex(n)
        success = 1
    except Exception as inst:
        r = stringutil.forcebytestr(inst)
        success = 0
    return wireprototypes.bytesresponse(b'%d %s\n' % (success, r))


@wireprotocommand(b'known', b'nodes *', permission=b'pull')
def known(repo, proto, nodes, others):
    v = b''.join(
        b and b'1' or b'0' for b in repo.known(wireprototypes.decodelist(nodes))
    )
    return wireprototypes.bytesresponse(v)


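# Illustrative sketch (hypothetical query): for three requested nodes of
# which only the second is known locally, the response body is b'010'.
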
@wireprotocommand(b'protocaps', b'caps', permission=b'pull')
def protocaps(repo, proto, caps):
    if proto.name == wireprototypes.SSHV1:
        proto._protocaps = set(caps.split(b' '))
    return wireprototypes.bytesresponse(b'OK')


@wireprotocommand(b'pushkey', b'namespace key old new', permission=b'push')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and stringutil.escapestr(new) != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new)  # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass  # binary, leave unmodified
    else:
        new = encoding.tolocal(new)  # normal path

    with proto.mayberedirectstdio() as output:
        r = (
            repo.pushkey(
                encoding.tolocal(namespace),
                encoding.tolocal(key),
                encoding.tolocal(old),
                new,
            )
            or False
        )

    output = output.getvalue() if output else b''
    return wireprototypes.bytesresponse(b'%d\n%s' % (int(r), output))


@wireprotocommand(b'stream_out', permission=b'pull')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. Client checks to see if it understands the format.
    '''
    return wireprototypes.streamreslegacy(streamclone.generatev1wireproto(repo))


@wireprotocommand(b'unbundle', b'heads', permission=b'push')
def unbundle(repo, proto, heads):
    their_heads = wireprototypes.decodelist(heads)

    with proto.mayberedirectstdio() as output:
        try:
            exchange.check_heads(repo, their_heads, b'preparing changes')
            cleanup = lambda: None
            try:
                payload = proto.getpayload()
                if repo.ui.configbool(b'server', b'streamunbundle'):

                    def cleanup():
                        # Ensure that the full payload is consumed, so
                        # that the connection doesn't contain trailing garbage.
                        for p in payload:
                            pass

                    fp = util.chunkbuffer(payload)
                else:
                    # write bundle data to temporary file as it can be big
                    fp, tempname = None, None

                    def cleanup():
                        if fp:
                            fp.close()
                        if tempname:
                            os.unlink(tempname)

                    fd, tempname = pycompat.mkstemp(prefix=b'hg-unbundle-')
                    repo.ui.debug(
                        b'redirecting incoming bundle to %s\n' % tempname
                    )
                    fp = os.fdopen(fd, pycompat.sysstr(b'wb+'))
                    for p in payload:
                        fp.write(p)
                    fp.seek(0)

                gen = exchange.readbundle(repo.ui, fp, None)
                if isinstance(
                    gen, changegroupmod.cg1unpacker
                ) and not bundle1allowed(repo, b'push'):
                    if proto.name == b'http-v1':
                        # We need to special case http because stderr does not
                        # get to the http client on a failed push, so we need
                        # to abuse some other error type to make sure the
                        # message gets to the user.
                        return wireprototypes.ooberror(bundle2required)
                    raise error.Abort(
                        bundle2requiredmain, hint=bundle2requiredhint
                    )

                r = exchange.unbundle(
                    repo, gen, their_heads, b'serve', proto.client()
                )
                if util.safehasattr(r, b'addpart'):
                    # The return looks streamable, we are in the bundle2 case
                    # and should return a stream.
                    return wireprototypes.streamreslegacy(gen=r.getchunks())
                return wireprototypes.pushres(
                    r, output.getvalue() if output else b''
                )

            finally:
                cleanup()

        except (error.BundleValueError, error.Abort, error.PushRaced) as exc:
            # handle non-bundle2 case first
            if not getattr(exc, 'duringunbundle2', False):
                try:
                    raise
                except error.Abort as exc:
                    # The old code we moved used procutil.stderr directly.
                    # We did not change it to minimise the code change.
                    # This needs to be moved to something proper.
                    # Feel free to do it.
                    procutil.stderr.write(b"abort: %s\n" % exc.message)
                    if exc.hint is not None:
                        procutil.stderr.write(b"(%s)\n" % exc.hint)
                    procutil.stderr.flush()
                    return wireprototypes.pushres(
                        0, output.getvalue() if output else b''
                    )
                except error.PushRaced:
                    return wireprototypes.pusherr(
                        pycompat.bytestr(exc),
                        output.getvalue() if output else b'',
                    )

            bundler = bundle2.bundle20(repo.ui)
            for out in getattr(exc, '_bundle2salvagedoutput', ()):
                bundler.addpart(out)
            try:
                try:
                    raise
                except error.PushkeyFailed as exc:
                    # check client caps
                    remotecaps = getattr(exc, '_replycaps', None)
                    if (
                        remotecaps is not None
                        and b'pushkey' not in remotecaps.get(b'error', ())
                    ):
                        # no support on remote side, fall back to the Abort
                        # handler.
                        raise
                    part = bundler.newpart(b'error:pushkey')
                    part.addparam(b'in-reply-to', exc.partid)
                    if exc.namespace is not None:
                        part.addparam(
                            b'namespace', exc.namespace, mandatory=False
                        )
                    if exc.key is not None:
                        part.addparam(b'key', exc.key, mandatory=False)
                    if exc.new is not None:
                        part.addparam(b'new', exc.new, mandatory=False)
                    if exc.old is not None:
                        part.addparam(b'old', exc.old, mandatory=False)
                    if exc.ret is not None:
                        part.addparam(b'ret', exc.ret, mandatory=False)
            except error.BundleValueError as exc:
                errpart = bundler.newpart(b'error:unsupportedcontent')
                if exc.parttype is not None:
                    errpart.addparam(b'parttype', exc.parttype)
                if exc.params:
                    errpart.addparam(b'params', b'\0'.join(exc.params))
            except error.Abort as exc:
                manargs = [(b'message', exc.message)]
                advargs = []
                if exc.hint is not None:
                    advargs.append((b'hint', exc.hint))
                bundler.addpart(
                    bundle2.bundlepart(b'error:abort', manargs, advargs)
                )
            except error.PushRaced as exc:
                bundler.newpart(
                    b'error:pushraced',
                    [(b'message', stringutil.forcebytestr(exc))],
                )
            return wireprototypes.streamreslegacy(gen=bundler.getchunks())
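
# Illustrative sketch (hypothetical configuration): choosing the streaming
# path above, instead of spooling the incoming bundle to a temporary file:
#
#     [server]
#     streamunbundle = true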