change the default timeout to 600 seconds
Benoit Boissinot
r1788:750b9cd8 default
@@ -1,313 +1,313 @@
1 1 HGRC(5)
2 2 =======
3 3 Bryan O'Sullivan <bos@serpentine.com>
4 4
5 5 NAME
6 6 ----
7 7 hgrc - configuration files for Mercurial
8 8
9 9 SYNOPSIS
10 10 --------
11 11
12 12 The Mercurial system uses a set of configuration files to control
13 13 aspects of its behaviour.
14 14
15 15 FILES
16 16 -----
17 17
18 18 Mercurial reads configuration data from several files, if they exist.
19 19 The names of these files depend on the system on which Mercurial is
20 20 installed.
21 21
22 22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
23 23 (Unix) <install-root>/etc/mercurial/hgrc::
24 24 Per-installation configuration files, searched for in the
25 25 directory where Mercurial is installed. For example, if installed
26 26 in /shared/tools, Mercurial will look in
27 27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
28 28 all Mercurial commands executed by any user in any directory.
29 29
30 30 (Unix) /etc/mercurial/hgrc.d/*.rc::
31 31 (Unix) /etc/mercurial/hgrc::
32 32 (Windows) C:\Mercurial\Mercurial.ini::
33 33 Per-system configuration files, for the system on which Mercurial
34 34 is running. Options in these files apply to all Mercurial
35 35 commands executed by any user in any directory. Options in these
36 36 files override per-installation options.
37 37
38 38 (Unix) $HOME/.hgrc::
39 39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
40 40 Per-user configuration file, for the user running Mercurial.
41 41 Options in this file apply to all Mercurial commands executed by
42 42 this user in any directory. Options in this file override
43 43 per-installation and per-system options.
44 44
45 45 (Unix, Windows) <repo>/.hg/hgrc::
46 46 Per-repository configuration options that only apply in a
47 47 particular repository. This file is not version-controlled, and
48 48 will not get transferred during a "clone" operation. Options in
49 49 this file override options in all other configuration files.
50 50
51 51 SYNTAX
52 52 ------
53 53
54 54 A configuration file consists of sections, led by a "[section]" header
55 55 and followed by "name: value" entries; "name=value" is also accepted.
56 56
57 57 [spam]
58 58 eggs=ham
59 59 green=
60 60 eggs
61 61
62 62 Each line contains one entry. If the lines that follow are indented,
63 63 they are treated as continuations of that entry.
64 64
65 65 Leading whitespace is removed from values. Empty lines are skipped.
66 66
67 67 Values may contain format strings which refer to other values in
68 68 the same section, or to values in a special DEFAULT section.
69 69
70 70 Lines beginning with "#" or ";" are ignored and may be used to provide
71 71 comments.
72 72
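The syntax described above — sections, "name=value" or "name: value" entries, indented continuation lines, comments starting with "#" or ";", and format strings referring to other values or to a DEFAULT section — is essentially the format handled by Python's ConfigParser. A minimal sketch of parsing such a snippet with the standard library (an illustration only, not Mercurial's own parser):

```python
# Illustrative only: parse an hgrc-style snippet with the stdlib
# configparser, whose syntax rules match those described above.
from configparser import ConfigParser, BasicInterpolation

text = """\
[spam]
eggs: ham
green =
    eggs
; comments start with ";" or "#"
meal = %(eggs)s and eggs
"""

cp = ConfigParser(interpolation=BasicInterpolation())
cp.read_string(text)

eggs = cp.get("spam", "eggs")    # "ham"
green = cp.get("spam", "green")  # indented continuation line joined in
meal = cp.get("spam", "meal")    # "%(eggs)s" expands within the section
```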
73 73 SECTIONS
74 74 --------
75 75
76 76 This section describes the different sections that may appear in a
77 77 Mercurial "hgrc" file, the purpose of each section, its possible
78 78 keys, and their possible values.
79 79
80 80 decode/encode::
81 81 Filters for transforming files on checkout/checkin. This would
82 82 typically be used for newline processing or other
83 83 localization/canonicalization of files.
84 84
85 85 Filters consist of a filter pattern followed by a filter command.
86 86 Filter patterns are globs by default, rooted at the repository
87 87 root. For example, to match any file ending in ".txt" in the root
88 88 directory only, use the pattern "*.txt". To match any file ending
89 89 in ".c" anywhere in the repository, use the pattern "**.c".
90 90
91 91 The filter command can start with a specifier, either "pipe:" or
92 92 "tempfile:". If no specifier is given, "pipe:" is used by default.
93 93
94 94 A "pipe:" command must accept data on stdin and return the
95 95 transformed data on stdout.
96 96
97 97 Pipe example:
98 98
99 99 [encode]
100 100 # uncompress gzip files on checkin to improve delta compression
101 101 # note: not necessarily a good idea, just an example
102 102 *.gz = pipe: gunzip
103 103
104 104 [decode]
105 105 # recompress gzip files when writing them to the working dir (we
106 106 # can safely omit "pipe:", because it's the default)
107 107 *.gz = gzip
108 108
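Mechanically, a "pipe:" filter just feeds the file contents to the command's stdin and takes the command's stdout as the result. A rough Python sketch of that contract (using "cat" as a stand-in no-op filter; this is not Mercurial's implementation):

```python
# Sketch of the "pipe:" filter contract: data in on stdin,
# transformed data out on stdout.
import subprocess

def pipe_filter(data, cmd):
    """Run `cmd` in a shell, feeding `data` to stdin; return its stdout."""
    proc = subprocess.run(cmd, shell=True, input=data,
                          stdout=subprocess.PIPE, check=True)
    return proc.stdout

# "cat" is a no-op filter: output equals input
result = pipe_filter(b"some file contents\n", "cat")
```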
109 109 A "tempfile:" command is a template. The string INFILE is replaced
110 110 with the name of a temporary file that contains the data to be
111 111 filtered by the command. The string OUTFILE is replaced with the
112 112 name of an empty temporary file, where the filtered data must be
113 113 written by the command.
114 114
115 115 NOTE: the tempfile mechanism is recommended for Windows systems,
116 116 where the standard shell I/O redirection operators often have
117 117 strange effects. In particular, if you are doing line ending
118 118 conversion on Windows using the popular dos2unix and unix2dos
119 119 programs, you *must* use the tempfile mechanism, as using pipes will
120 120 corrupt the contents of your files.
121 121
122 122 Tempfile example:
123 123
124 124 [encode]
125 125 # convert files to unix line ending conventions on checkin
126 126 **.txt = tempfile: dos2unix -n INFILE OUTFILE
127 127
128 128 [decode]
129 129 # convert files to windows line ending conventions when writing
130 130 # them to the working dir
131 131 **.txt = tempfile: unix2dos -n INFILE OUTFILE
132 132
133 133 hooks::
134 134 Commands that are run automatically in response to actions such as
135 135 starting or finishing a commit. Multiple commands can be run for
136 136 the same action by appending a suffix to the action. Overriding a
137 137 site-wide hook can be done by changing its value or setting it to
138 138 an empty string.
139 139
140 140 Example .hg/hgrc:
141 141
142 142 [hooks]
143 143 # do not use the site-wide hook
144 144 incoming =
145 145 incoming.email = /my/email/hook
146 146 incoming.autobuild = /my/build/hook
147 147
148 148 Most hooks are run with environment variables set that give added
149 149 useful information. For each hook below, the environment variables
150 150 it is passed are listed with names of the form "$HG_foo".
151 151
152 152 changegroup;;
153 153 Run after a changegroup has been added via push, pull or
154 154 unbundle. ID of the first new changeset is in $HG_NODE.
155 155 commit;;
156 156 Run after a changeset has been created in the local repository.
157 157 ID of the newly created changeset is in $HG_NODE. Parent
158 158 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
159 159 incoming;;
160 160 Run after a changeset has been pulled, pushed, or unbundled into
161 161 the local repository. The ID of the newly arrived changeset is in
162 162 $HG_NODE.
163 163 outgoing;;
164 164 Run after sending changes from local repository to another. ID of
165 165 first changeset sent is in $HG_NODE. Source of operation is in
166 166 $HG_SOURCE; see "preoutgoing" hook for description.
167 167 prechangegroup;;
168 168 Run before a changegroup is added via push, pull or unbundle.
169 169 Exit status 0 allows the changegroup to proceed. Non-zero status
170 170 will cause the push, pull or unbundle to fail.
171 171 precommit;;
172 172 Run before starting a local commit. Exit status 0 allows the
173 173 commit to proceed. Non-zero status will cause the commit to fail.
174 174 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
175 175 preoutgoing;;
176 176 Run before computing changes to send from the local repository to
177 177 another. Non-zero status will cause failure. This lets you
178 178 prevent pull over http or ssh. It also runs for local pull,
179 179 push (outbound) and bundle commands, but is not effective there,
180 180 since the files can simply be copied instead. Source of operation is in
181 181 $HG_SOURCE. If "serve", operation is happening on behalf of
182 182 remote ssh or http repository. If "push", "pull" or "bundle",
183 183 operation is happening on behalf of repository on same system.
184 184 pretag;;
185 185 Run before creating a tag. Exit status 0 allows the tag to be
186 186 created. Non-zero status will cause the tag to fail. ID of
187 187 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
188 188 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
189 189 pretxnchangegroup;;
190 190 Run after a changegroup has been added via push, pull or unbundle,
191 191 but before the transaction has been committed. Changegroup is
192 192 visible to hook program. This lets you validate incoming changes
193 193 before accepting them. Passed the ID of the first new changeset
194 194 in $HG_NODE. Exit status 0 allows the transaction to commit.
195 195 Non-zero status will cause the transaction to be rolled back and
196 196 the push, pull or unbundle will fail.
197 197 pretxncommit;;
198 198 Run after a changeset has been created but the transaction not yet
199 199 committed. Changeset is visible to hook program. This lets you
200 200 validate commit message and changes. Exit status 0 allows the
201 201 commit to proceed. Non-zero status will cause the transaction to
202 202 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
203 203 IDs are in $HG_PARENT1 and $HG_PARENT2.
204 204 tag;;
205 205 Run after a tag is created. ID of tagged changeset is in
206 206 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
207 207 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
208 208
209 209 In earlier releases, the names of hook environment variables did not
210 210 have a "HG_" prefix. These unprefixed names are still provided in
211 211 the environment for backwards compatibility, but their use is
212 212 deprecated, and they will be removed in a future release.
213 213
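An external hook can be any program: it reads the "$HG_foo" variables from its environment, and for the "pre*" hooks its exit status decides whether the operation proceeds. A minimal sketch in Python (the script path and hook name are illustrative, not taken from the manual):

```python
# Hypothetical hook script, wired up in an hgrc as e.g.:
#   [hooks]
#   pretxncommit.check = python /my/hooks/check.py
import os
import sys

def check(env):
    """Return 0 (allow the transaction) if HG_NODE is set, 1 (refuse)
    otherwise; a pre* hook fails on any non-zero exit status."""
    node = env.get("HG_NODE")
    if not node:
        return 1
    sys.stderr.write("checking changeset %s\n" % node)
    return 0

if __name__ == "__main__":
    sys.exit(check(os.environ))
```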
214 214 http_proxy::
215 215 Used to access web-based Mercurial repositories through an HTTP
216 216 proxy.
217 217 host;;
218 218 Host name and (optional) port of the proxy server, for example
219 219 "myproxy:8000".
220 220 no;;
221 221 Optional. Comma-separated list of host names that should bypass
222 222 the proxy.
223 223 passwd;;
224 224 Optional. Password to authenticate with at the proxy server.
225 225 user;;
226 226 Optional. User name to authenticate with at the proxy server.
227 227
228 228 paths::
229 229 Assigns symbolic names to repositories. The left side is the
230 230 symbolic name, and the right gives the directory or URL that is the
231 231 location of the repository.
232 232
233 233 ui::
234 234 User interface controls.
235 235 debug;;
236 236 Print debugging information. True or False. Default is False.
237 237 editor;;
238 238 The editor to use during a commit. Default is $EDITOR or "vi".
239 239 interactive;;
240 240 Whether to prompt the user. True or False. Default is True.
241 241 merge;;
242 242 The conflict resolution program to use during a manual merge.
243 243 Default is "hgmerge".
244 244 quiet;;
245 245 Reduce the amount of output printed. True or False. Default is False.
246 246 remotecmd;;
247 247 Remote command to use for clone/push/pull operations. Default is 'hg'.
248 248 ssh;;
249 249 Command to use for SSH connections. Default is 'ssh'.
250 250 timeout;;
251 251 The timeout used when a lock is held (in seconds); a negative value
252 means no timeout. Default is 1024.
252 means no timeout. Default is 600.
253 253 username;;
254 254 The committer of a changeset created when running "commit".
255 255 Typically a person's name and email address, e.g. "Fred Widget
256 256 <fred@example.com>". Default is $EMAIL or username@hostname.
257 257 verbose;;
258 258 Increase the amount of output printed. True or False. Default is False.
259 259
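The "timeout" setting governs how long Mercurial retries a lock held by another process (see "do_lock" in localrepo.py below: one immediate attempt, then a bounded wait). The retry shape can be sketched like this; the names here are illustrative, not Mercurial's lock API:

```python
# Illustrative retry loop for a held lock: try once per tick until
# acquired or `timeout` seconds pass; timeout < 0 waits forever.
import time

class LockHeld(Exception):
    pass

def acquire_with_timeout(trylock, timeout, tick=0.01):
    deadline = None if timeout < 0 else time.time() + timeout
    while True:
        try:
            return trylock()
        except LockHeld:
            if deadline is not None and time.time() >= deadline:
                raise
            time.sleep(tick)

# a fake lock that is busy for the first two attempts
attempts = []
def trylock():
    attempts.append(1)
    if len(attempts) < 3:
        raise LockHeld("held by another process")
    return "lock"

token = acquire_with_timeout(trylock, 600)
```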
260 260
261 261 web::
262 262 Web interface configuration.
263 263 accesslog;;
264 264 Where to output the access log. Default is stdout.
265 265 address;;
266 266 Interface address to bind to. Default is all.
267 267 allowbz2;;
268 268 Whether to allow .tar.bz2 downloading of repo revisions. Default is false.
269 269 allowgz;;
270 270 Whether to allow .tar.gz downloading of repo revisions. Default is false.
271 271 allowpull;;
272 272 Whether to allow pulling from the repository. Default is true.
273 273 allowzip;;
274 274 Whether to allow .zip downloading of repo revisions. Default is false.
275 275 This feature creates temporary files.
276 276 description;;
277 277 Textual description of the repository's purpose or contents.
278 278 Default is "unknown".
279 279 errorlog;;
280 280 Where to output the error log. Default is stderr.
281 281 ipv6;;
282 282 Whether to use IPv6. Default is false.
283 283 name;;
284 284 Repository name to use in the web interface. Default is current
285 285 working directory.
286 286 maxchanges;;
287 287 Maximum number of changes to list on the changelog. Default is 10.
288 288 maxfiles;;
289 289 Maximum number of files to list per changeset. Default is 10.
290 290 port;;
291 291 Port to listen on. Default is 8000.
292 292 style;;
293 293 Which template map style to use.
294 294 templates;;
295 295 Where to find the HTML templates. Default is install path.
296 296
297 297
298 298 AUTHOR
299 299 ------
300 300 Bryan O'Sullivan <bos@serpentine.com>.
301 301
302 302 Mercurial was written by Matt Mackall <mpm@selenic.com>.
303 303
304 304 SEE ALSO
305 305 --------
306 306 hg(1)
307 307
308 308 COPYING
309 309 -------
310 310 This manual page is copyright 2005 Bryan O'Sullivan.
311 311 Mercurial is copyright 2005 Matt Mackall.
312 312 Free use of this software is granted under the terms of the GNU General
313 313 Public License (GPL).
@@ -1,1868 +1,1868 @@
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 import struct, os, util
9 9 import filelog, manifest, changelog, dirstate, repo
10 10 from node import *
11 11 from i18n import gettext as _
12 12 from demandload import *
13 13 demandload(globals(), "re lock transaction tempfile stat mdiff errno")
14 14
15 15 class localrepository(object):
16 16 def __init__(self, ui, path=None, create=0):
17 17 if not path:
18 18 p = os.getcwd()
19 19 while not os.path.isdir(os.path.join(p, ".hg")):
20 20 oldp = p
21 21 p = os.path.dirname(p)
22 22 if p == oldp:
23 23 raise repo.RepoError(_("no repo found"))
24 24 path = p
25 25 self.path = os.path.join(path, ".hg")
26 26
27 27 if not create and not os.path.isdir(self.path):
28 28 raise repo.RepoError(_("repository %s not found") % path)
29 29
30 30 self.root = os.path.abspath(path)
31 31 self.ui = ui
32 32 self.opener = util.opener(self.path)
33 33 self.wopener = util.opener(self.root)
34 34 self.manifest = manifest.manifest(self.opener)
35 35 self.changelog = changelog.changelog(self.opener)
36 36 self.tagscache = None
37 37 self.nodetagscache = None
38 38 self.encodepats = None
39 39 self.decodepats = None
40 40
41 41 if create:
42 42 os.mkdir(self.path)
43 43 os.mkdir(self.join("data"))
44 44
45 45 self.dirstate = dirstate.dirstate(self.opener, ui, self.root)
46 46 try:
47 47 self.ui.readconfig(self.join("hgrc"))
48 48 except IOError:
49 49 pass
50 50
51 51 def hook(self, name, throw=False, **args):
52 52 def runhook(name, cmd):
53 53 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
54 54 old = {}
55 55 for k, v in args.items():
56 56 k = k.upper()
57 57 old['HG_' + k] = os.environ.get(k, None)
58 58 old[k] = os.environ.get(k, None)
59 59 os.environ['HG_' + k] = str(v)
60 60 os.environ[k] = str(v)
61 61
62 62 try:
63 63 # Hooks run in the repository root
64 64 olddir = os.getcwd()
65 65 os.chdir(self.root)
66 66 r = os.system(cmd)
67 67 finally:
68 68 for k, v in old.items():
69 69 if v is not None:
70 70 os.environ[k] = v
71 71 else:
72 72 del os.environ[k]
73 73
74 74 os.chdir(olddir)
75 75
76 76 if r:
77 77 desc, r = util.explain_exit(r)
78 78 if throw:
79 79 raise util.Abort(_('%s hook %s') % (name, desc))
80 80 self.ui.warn(_('error: %s hook %s\n') % (name, desc))
81 81 return False
82 82 return True
83 83
84 84 r = True
85 85 for hname, cmd in self.ui.configitems("hooks"):
86 86 s = hname.split(".")
87 87 if s[0] == name and cmd:
88 88 r = runhook(hname, cmd) and r
89 89 return r
90 90
91 91 def tags(self):
92 92 '''return a mapping of tag to node'''
93 93 if not self.tagscache:
94 94 self.tagscache = {}
95 95 def addtag(self, k, n):
96 96 try:
97 97 bin_n = bin(n)
98 98 except TypeError:
99 99 bin_n = ''
100 100 self.tagscache[k.strip()] = bin_n
101 101
102 102 try:
103 103 # read each head of the tags file, ending with the tip
104 104 # and add each tag found to the map, with "newer" ones
105 105 # taking precedence
106 106 fl = self.file(".hgtags")
107 107 h = fl.heads()
108 108 h.reverse()
109 109 for r in h:
110 110 for l in fl.read(r).splitlines():
111 111 if l:
112 112 n, k = l.split(" ", 1)
113 113 addtag(self, k, n)
114 114 except KeyError:
115 115 pass
116 116
117 117 try:
118 118 f = self.opener("localtags")
119 119 for l in f:
120 120 n, k = l.split(" ", 1)
121 121 addtag(self, k, n)
122 122 except IOError:
123 123 pass
124 124
125 125 self.tagscache['tip'] = self.changelog.tip()
126 126
127 127 return self.tagscache
128 128
129 129 def tagslist(self):
130 130 '''return a list of tags ordered by revision'''
131 131 l = []
132 132 for t, n in self.tags().items():
133 133 try:
134 134 r = self.changelog.rev(n)
135 135 except:
136 136 r = -2 # sort to the beginning of the list if unknown
137 137 l.append((r, t, n))
138 138 l.sort()
139 139 return [(t, n) for r, t, n in l]
140 140
141 141 def nodetags(self, node):
142 142 '''return the tags associated with a node'''
143 143 if not self.nodetagscache:
144 144 self.nodetagscache = {}
145 145 for t, n in self.tags().items():
146 146 self.nodetagscache.setdefault(n, []).append(t)
147 147 return self.nodetagscache.get(node, [])
148 148
149 149 def lookup(self, key):
150 150 try:
151 151 return self.tags()[key]
152 152 except KeyError:
153 153 try:
154 154 return self.changelog.lookup(key)
155 155 except:
156 156 raise repo.RepoError(_("unknown revision '%s'") % key)
157 157
158 158 def dev(self):
159 159 return os.stat(self.path).st_dev
160 160
161 161 def local(self):
162 162 return True
163 163
164 164 def join(self, f):
165 165 return os.path.join(self.path, f)
166 166
167 167 def wjoin(self, f):
168 168 return os.path.join(self.root, f)
169 169
170 170 def file(self, f):
171 171 if f[0] == '/':
172 172 f = f[1:]
173 173 return filelog.filelog(self.opener, f)
174 174
175 175 def getcwd(self):
176 176 return self.dirstate.getcwd()
177 177
178 178 def wfile(self, f, mode='r'):
179 179 return self.wopener(f, mode)
180 180
181 181 def wread(self, filename):
182 182 if self.encodepats == None:
183 183 l = []
184 184 for pat, cmd in self.ui.configitems("encode"):
185 185 mf = util.matcher("", "/", [pat], [], [])[1]
186 186 l.append((mf, cmd))
187 187 self.encodepats = l
188 188
189 189 data = self.wopener(filename, 'r').read()
190 190
191 191 for mf, cmd in self.encodepats:
192 192 if mf(filename):
193 193 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
194 194 data = util.filter(data, cmd)
195 195 break
196 196
197 197 return data
198 198
199 199 def wwrite(self, filename, data, fd=None):
200 200 if self.decodepats == None:
201 201 l = []
202 202 for pat, cmd in self.ui.configitems("decode"):
203 203 mf = util.matcher("", "/", [pat], [], [])[1]
204 204 l.append((mf, cmd))
205 205 self.decodepats = l
206 206
207 207 for mf, cmd in self.decodepats:
208 208 if mf(filename):
209 209 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
210 210 data = util.filter(data, cmd)
211 211 break
212 212
213 213 if fd:
214 214 return fd.write(data)
215 215 return self.wopener(filename, 'w').write(data)
216 216
217 217 def transaction(self):
218 218 # save dirstate for undo
219 219 try:
220 220 ds = self.opener("dirstate").read()
221 221 except IOError:
222 222 ds = ""
223 223 self.opener("journal.dirstate", "w").write(ds)
224 224
225 225 def after():
226 226 util.rename(self.join("journal"), self.join("undo"))
227 227 util.rename(self.join("journal.dirstate"),
228 228 self.join("undo.dirstate"))
229 229
230 230 return transaction.transaction(self.ui.warn, self.opener,
231 231 self.join("journal"), after)
232 232
233 233 def recover(self):
234 234 l = self.lock()
235 235 if os.path.exists(self.join("journal")):
236 236 self.ui.status(_("rolling back interrupted transaction\n"))
237 237 transaction.rollback(self.opener, self.join("journal"))
238 238 self.manifest = manifest.manifest(self.opener)
239 239 self.changelog = changelog.changelog(self.opener)
240 240 return True
241 241 else:
242 242 self.ui.warn(_("no interrupted transaction available\n"))
243 243 return False
244 244
245 245 def undo(self, wlock=None):
246 246 if not wlock:
247 247 wlock = self.wlock()
248 248 l = self.lock()
249 249 if os.path.exists(self.join("undo")):
250 250 self.ui.status(_("rolling back last transaction\n"))
251 251 transaction.rollback(self.opener, self.join("undo"))
252 252 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
253 253 self.dirstate.read()
254 254 else:
255 255 self.ui.warn(_("no undo information available\n"))
256 256
257 257 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None):
258 258 try:
259 259 l = lock.lock(self.join(lockname), 0, releasefn)
260 260 except lock.LockHeld, inst:
261 261 if not wait:
262 262 raise inst
263 263 self.ui.warn(_("waiting for lock held by %s\n") % inst.args[0])
264 264 try:
265 # default to 1024 seconds timeout
265 # default to 600 seconds timeout
266 266 l = lock.lock(self.join(lockname),
267 int(self.ui.config("ui", "timeout") or 1024),
267 int(self.ui.config("ui", "timeout") or 600),
268 268 releasefn)
269 269 except lock.LockHeld, inst:
270 270 raise util.Abort(_("timeout while waiting for "
271 271 "lock held by %s") % inst.args[0])
272 272 if acquirefn:
273 273 acquirefn()
274 274 return l
275 275
276 276 def lock(self, wait=1):
277 277 return self.do_lock("lock", wait)
278 278
279 279 def wlock(self, wait=1):
280 280 return self.do_lock("wlock", wait,
281 281 self.dirstate.write,
282 282 self.dirstate.read)
283 283
284 284 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
285 285 "determine whether a new filenode is needed"
286 286 fp1 = manifest1.get(filename, nullid)
287 287 fp2 = manifest2.get(filename, nullid)
288 288
289 289 if fp2 != nullid:
290 290 # is one parent an ancestor of the other?
291 291 fpa = filelog.ancestor(fp1, fp2)
292 292 if fpa == fp1:
293 293 fp1, fp2 = fp2, nullid
294 294 elif fpa == fp2:
295 295 fp2 = nullid
296 296
297 297 # is the file unmodified from the parent? report existing entry
298 298 if fp2 == nullid and text == filelog.read(fp1):
299 299 return (fp1, None, None)
300 300
301 301 return (None, fp1, fp2)
302 302
303 303 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
304 304 orig_parent = self.dirstate.parents()[0] or nullid
305 305 p1 = p1 or self.dirstate.parents()[0] or nullid
306 306 p2 = p2 or self.dirstate.parents()[1] or nullid
307 307 c1 = self.changelog.read(p1)
308 308 c2 = self.changelog.read(p2)
309 309 m1 = self.manifest.read(c1[0])
310 310 mf1 = self.manifest.readflags(c1[0])
311 311 m2 = self.manifest.read(c2[0])
312 312 changed = []
313 313
314 314 if orig_parent == p1:
315 315 update_dirstate = 1
316 316 else:
317 317 update_dirstate = 0
318 318
319 319 if not wlock:
320 320 wlock = self.wlock()
321 321 l = self.lock()
322 322 tr = self.transaction()
323 323 mm = m1.copy()
324 324 mfm = mf1.copy()
325 325 linkrev = self.changelog.count()
326 326 for f in files:
327 327 try:
328 328 t = self.wread(f)
329 329 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
330 330 r = self.file(f)
331 331 mfm[f] = tm
332 332
333 333 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
334 334 if entry:
335 335 mm[f] = entry
336 336 continue
337 337
338 338 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
339 339 changed.append(f)
340 340 if update_dirstate:
341 341 self.dirstate.update([f], "n")
342 342 except IOError:
343 343 try:
344 344 del mm[f]
345 345 del mfm[f]
346 346 if update_dirstate:
347 347 self.dirstate.forget([f])
348 348 except:
349 349 # deleted from p2?
350 350 pass
351 351
352 352 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
353 353 user = user or self.ui.username()
354 354 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
355 355 tr.close()
356 356 if update_dirstate:
357 357 self.dirstate.setparents(n, nullid)
358 358
359 359 def commit(self, files=None, text="", user=None, date=None,
360 360 match=util.always, force=False, wlock=None):
361 361 commit = []
362 362 remove = []
363 363 changed = []
364 364
365 365 if files:
366 366 for f in files:
367 367 s = self.dirstate.state(f)
368 368 if s in 'nmai':
369 369 commit.append(f)
370 370 elif s == 'r':
371 371 remove.append(f)
372 372 else:
373 373 self.ui.warn(_("%s not tracked!\n") % f)
374 374 else:
375 375 modified, added, removed, deleted, unknown = self.changes(match=match)
376 376 commit = modified + added
377 377 remove = removed
378 378
379 379 p1, p2 = self.dirstate.parents()
380 380 c1 = self.changelog.read(p1)
381 381 c2 = self.changelog.read(p2)
382 382 m1 = self.manifest.read(c1[0])
383 383 mf1 = self.manifest.readflags(c1[0])
384 384 m2 = self.manifest.read(c2[0])
385 385
386 386 if not commit and not remove and not force and p2 == nullid:
387 387 self.ui.status(_("nothing changed\n"))
388 388 return None
389 389
390 390 xp1 = hex(p1)
391 391 if p2 == nullid: xp2 = ''
392 392 else: xp2 = hex(p2)
393 393
394 394 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
395 395
396 396 if not wlock:
397 397 wlock = self.wlock()
398 398 l = self.lock()
399 399 tr = self.transaction()
400 400
401 401 # check in files
402 402 new = {}
403 403 linkrev = self.changelog.count()
404 404 commit.sort()
405 405 for f in commit:
406 406 self.ui.note(f + "\n")
407 407 try:
408 408 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
409 409 t = self.wread(f)
410 410 except IOError:
411 411 self.ui.warn(_("trouble committing %s!\n") % f)
412 412 raise
413 413
414 414 r = self.file(f)
415 415
416 416 meta = {}
417 417 cp = self.dirstate.copied(f)
418 418 if cp:
419 419 meta["copy"] = cp
420 420 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
421 421 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
422 422 fp1, fp2 = nullid, nullid
423 423 else:
424 424 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
425 425 if entry:
426 426 new[f] = entry
427 427 continue
428 428
429 429 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
430 430 # remember what we've added so that we can later calculate
431 431 # the files to pull from a set of changesets
432 432 changed.append(f)
433 433
434 434 # update manifest
435 435 m1 = m1.copy()
436 436 m1.update(new)
437 437 for f in remove:
438 438 if f in m1:
439 439 del m1[f]
440 440 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
441 441 (new, remove))
442 442
443 443 # add changeset
444 444 new = new.keys()
445 445 new.sort()
446 446
447 447 if not text:
448 448 edittext = [""]
449 449 if p2 != nullid:
450 450 edittext.append("HG: branch merge")
451 451 edittext.extend(["HG: changed %s" % f for f in changed])
452 452 edittext.extend(["HG: removed %s" % f for f in remove])
453 453 if not changed and not remove:
454 454 edittext.append("HG: no files changed")
455 455 edittext.append("")
456 456 # run editor in the repository root
457 457 olddir = os.getcwd()
458 458 os.chdir(self.root)
459 459 edittext = self.ui.edit("\n".join(edittext))
460 460 os.chdir(olddir)
461 461 if not edittext.rstrip():
462 462 return None
463 463 text = edittext
464 464
465 465 user = user or self.ui.username()
466 466 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
467 467 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
468 468 parent2=xp2)
469 469 tr.close()
470 470
471 471 self.dirstate.setparents(n)
472 472 self.dirstate.update(new, "n")
473 473 self.dirstate.forget(remove)
474 474
475 475 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
476 476 return n
477 477
478 478 def walk(self, node=None, files=[], match=util.always):
479 479 if node:
480 480 fdict = dict.fromkeys(files)
481 481 for fn in self.manifest.read(self.changelog.read(node)[0]):
482 482 fdict.pop(fn, None)
483 483 if match(fn):
484 484 yield 'm', fn
485 485 for fn in fdict:
486 486 self.ui.warn(_('%s: No such file in rev %s\n') % (
487 487 util.pathto(self.getcwd(), fn), short(node)))
488 488 else:
489 489 for src, fn in self.dirstate.walk(files, match):
490 490 yield src, fn
491 491
492 492 def changes(self, node1=None, node2=None, files=[], match=util.always,
493 493 wlock=None):
494 494 """return changes between two nodes or node and working directory
495 495
496 496 If node1 is None, use the first dirstate parent instead.
497 497 If node2 is None, compare node1 with working directory.
498 498 """
499 499
500 500 def fcmp(fn, mf):
501 501 t1 = self.wread(fn)
502 502 t2 = self.file(fn).read(mf.get(fn, nullid))
503 503 return cmp(t1, t2)
504 504
505 505 def mfmatches(node):
506 506 change = self.changelog.read(node)
507 507 mf = dict(self.manifest.read(change[0]))
508 508 for fn in mf.keys():
509 509 if not match(fn):
510 510 del mf[fn]
511 511 return mf
512 512
513 513 # are we comparing the working directory?
514 514 if not node2:
515 515 if not wlock:
516 516 try:
517 517 wlock = self.wlock(wait=0)
518 518 except lock.LockException:
519 519 wlock = None
520 520 lookup, modified, added, removed, deleted, unknown = (
521 521 self.dirstate.changes(files, match))
522 522
523 523 # are we comparing working dir against its parent?
524 524 if not node1:
525 525 if lookup:
526 526 # do a full compare of any files that might have changed
527 527 mf2 = mfmatches(self.dirstate.parents()[0])
528 528 for f in lookup:
529 529 if fcmp(f, mf2):
530 530 modified.append(f)
531 531 elif wlock is not None:
532 532 self.dirstate.update([f], "n")
533 533 else:
534 534 # we are comparing working dir against non-parent
535 535 # generate a pseudo-manifest for the working dir
536 536 mf2 = mfmatches(self.dirstate.parents()[0])
537 537 for f in lookup + modified + added:
538 538 mf2[f] = ""
539 539 for f in removed:
540 540 if f in mf2:
541 541 del mf2[f]
542 542 else:
543 543 # we are comparing two revisions
544 544 deleted, unknown = [], []
545 545 mf2 = mfmatches(node2)
546 546
547 547 if node1:
548 548 # flush lists from dirstate before comparing manifests
549 549 modified, added = [], []
550 550
551 551 mf1 = mfmatches(node1)
552 552
553 553 for fn in mf2:
554 554 if mf1.has_key(fn):
555 555 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
556 556 modified.append(fn)
557 557 del mf1[fn]
558 558 else:
559 559 added.append(fn)
560 560
561 561 removed = mf1.keys()
562 562
563 563 # sort and return results:
564 564 for l in modified, added, removed, deleted, unknown:
565 565 l.sort()
566 566 return (modified, added, removed, deleted, unknown)
567 567
568 568 def add(self, list, wlock=None):
569 569 if not wlock:
570 570 wlock = self.wlock()
571 571 for f in list:
572 572 p = self.wjoin(f)
573 573 if not os.path.exists(p):
574 574 self.ui.warn(_("%s does not exist!\n") % f)
575 575 elif not os.path.isfile(p):
576 576 self.ui.warn(_("%s not added: only files supported currently\n")
577 577 % f)
578 578 elif self.dirstate.state(f) in 'an':
579 579 self.ui.warn(_("%s already tracked!\n") % f)
580 580 else:
581 581 self.dirstate.update([f], "a")
582 582
583 583 def forget(self, list, wlock=None):
584 584 if not wlock:
585 585 wlock = self.wlock()
586 586 for f in list:
587 587 if self.dirstate.state(f) not in 'ai':
588 588 self.ui.warn(_("%s not added!\n") % f)
589 589 else:
590 590 self.dirstate.forget([f])
591 591
592 592 def remove(self, list, unlink=False, wlock=None):
593 593 if unlink:
594 594 for f in list:
595 595 try:
596 596 util.unlink(self.wjoin(f))
597 597 except OSError, inst:
598 598 if inst.errno != errno.ENOENT:
599 599 raise
600 600 if not wlock:
601 601 wlock = self.wlock()
602 602 for f in list:
603 603 p = self.wjoin(f)
604 604 if os.path.exists(p):
605 605 self.ui.warn(_("%s still exists!\n") % f)
606 606 elif self.dirstate.state(f) == 'a':
607 607 self.dirstate.forget([f])
608 608 elif f not in self.dirstate:
609 609 self.ui.warn(_("%s not tracked!\n") % f)
610 610 else:
611 611 self.dirstate.update([f], "r")
612 612
613 613 def undelete(self, list, wlock=None):
614 614 p = self.dirstate.parents()[0]
615 615 mn = self.changelog.read(p)[0]
616 616 mf = self.manifest.readflags(mn)
617 617 m = self.manifest.read(mn)
618 618 if not wlock:
619 619 wlock = self.wlock()
620 620 for f in list:
621 621 if self.dirstate.state(f) not in "r":
622 622 self.ui.warn(_("%s not removed!\n") % f)
623 623 else:
624 624 t = self.file(f).read(m[f])
625 625 self.wwrite(f, t)
626 626 util.set_exec(self.wjoin(f), mf[f])
627 627 self.dirstate.update([f], "n")
628 628
629 629 def copy(self, source, dest, wlock=None):
630 630 p = self.wjoin(dest)
631 631 if not os.path.exists(p):
632 632 self.ui.warn(_("%s does not exist!\n") % dest)
633 633 elif not os.path.isfile(p):
634 634 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
635 635 else:
636 636 if not wlock:
637 637 wlock = self.wlock()
638 638 if self.dirstate.state(dest) == '?':
639 639 self.dirstate.update([dest], "a")
640 640 self.dirstate.copy(source, dest)
641 641
642 642 def heads(self, start=None):
643 643 heads = self.changelog.heads(start)
644 644 # sort the output in rev descending order
645 645 heads = [(-self.changelog.rev(h), h) for h in heads]
646 646 heads.sort()
647 647 return [n for (r, n) in heads]
648 648
649 649 # branchlookup returns a dict giving a list of branches for
650 650 # each head. A branch is defined as the tag of a node or
651 651 # the branch of the node's parents. If a node has multiple
652 652 # branch tags, tags are eliminated if they are visible from other
653 653 # branch tags.
654 654 #
655 655 # So, for this graph: a->b->c->d->e
656 656 # \ /
657 657 # aa -----/
658 658 # a has tag 2.6.12
659 659 # d has tag 2.6.13
660 660 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
661 661 # for 2.6.12 can be reached from the node for 2.6.13, it is eliminated
662 662 # from the list.
663 663 #
664 664 # It is possible that more than one head will have the same branch tag.
665 665 # callers need to check the result for multiple heads under the same
666 666 # branch tag if that is a problem for them (i.e. checkout of a specific
667 667 # branch).
668 668 #
669 669 # passing in a specific branch will limit the depth of the search
670 670 # through the parents. It won't limit the branches returned in the
671 671 # result though.
672 672 def branchlookup(self, heads=None, branch=None):
673 673 if not heads:
674 674 heads = self.heads()
675 675 headt = list(heads)
676 676 chlog = self.changelog
677 677 branches = {}
678 678 merges = []
679 679 seenmerge = {}
680 680
681 681 # traverse the tree once for each head, recording in the branches
682 682 # dict which tags are visible from this head. The branches
683 683 # dict also records which tags are visible from each tag
684 684 # while we traverse.
685 685 while headt or merges:
686 686 if merges:
687 687 n, found = merges.pop()
688 688 visit = [n]
689 689 else:
690 690 h = headt.pop()
691 691 visit = [h]
692 692 found = [h]
693 693 seen = {}
694 694 while visit:
695 695 n = visit.pop()
696 696 if n in seen:
697 697 continue
698 698 pp = chlog.parents(n)
699 699 tags = self.nodetags(n)
700 700 if tags:
701 701 for x in tags:
702 702 if x == 'tip':
703 703 continue
704 704 for f in found:
705 705 branches.setdefault(f, {})[n] = 1
706 706 branches.setdefault(n, {})[n] = 1
707 707 break
708 708 if n not in found:
709 709 found.append(n)
710 710 if branch in tags:
711 711 continue
712 712 seen[n] = 1
713 713 if pp[1] != nullid and n not in seenmerge:
714 714 merges.append((pp[1], [x for x in found]))
715 715 seenmerge[n] = 1
716 716 if pp[0] != nullid:
717 717 visit.append(pp[0])
718 718 # traverse the branches dict, eliminating branch tags from each
719 719 # head that are visible from another branch tag for that head.
720 720 out = {}
721 721 viscache = {}
722 722 for h in heads:
723 723 def visible(node):
724 724 if node in viscache:
725 725 return viscache[node]
726 726 ret = {}
727 727 visit = [node]
728 728 while visit:
729 729 x = visit.pop()
730 730 if x in viscache:
731 731 ret.update(viscache[x])
732 732 elif x not in ret:
733 733 ret[x] = 1
734 734 if x in branches:
735 735 visit[len(visit):] = branches[x].keys()
736 736 viscache[node] = ret
737 737 return ret
738 738 if h not in branches:
739 739 continue
740 740 # O(n^2), but somewhat limited. This only searches the
741 741 # tags visible from a specific head, not all the tags in the
742 742 # whole repo.
743 743 for b in branches[h]:
744 744 vis = False
745 745 for bb in branches[h].keys():
746 746 if b != bb:
747 747 if b in visible(bb):
748 748 vis = True
749 749 break
750 750 if not vis:
751 751 l = out.setdefault(h, [])
752 752 l[len(l):] = self.nodetags(b)
753 753 return out
754 754
755 755 def branches(self, nodes):
756 756 if not nodes:
757 757 nodes = [self.changelog.tip()]
758 758 b = []
759 759 for n in nodes:
760 760 t = n
761 761 while n:
762 762 p = self.changelog.parents(n)
763 763 if p[1] != nullid or p[0] == nullid:
764 764 b.append((t, n, p[0], p[1]))
765 765 break
766 766 n = p[0]
767 767 return b
768 768
769 769 def between(self, pairs):
770 770 r = []
771 771
772 772 for top, bottom in pairs:
773 773 n, l, i = top, [], 0
774 774 f = 1
775 775
776 776 while n != bottom:
777 777 p = self.changelog.parents(n)[0]
778 778 if i == f:
779 779 l.append(n)
780 780 f = f * 2
781 781 n = p
782 782 i += 1
783 783
784 784 r.append(l)
785 785
786 786 return r
787 787
788 788 def findincoming(self, remote, base=None, heads=None):
789 789 m = self.changelog.nodemap
790 790 search = []
791 791 fetch = {}
792 792 seen = {}
793 793 seenbranch = {}
794 794 if base is None:
795 795 base = {}
796 796
797 797 # assume we're closer to the tip than the root
798 798 # and start by examining the heads
799 799 self.ui.status(_("searching for changes\n"))
800 800
801 801 if not heads:
802 802 heads = remote.heads()
803 803
804 804 unknown = []
805 805 for h in heads:
806 806 if h not in m:
807 807 unknown.append(h)
808 808 else:
809 809 base[h] = 1
810 810
811 811 if not unknown:
812 812 return None
813 813
814 814 rep = {}
815 815 reqcnt = 0
816 816
817 817 # search through remote branches
818 818 # a 'branch' here is a linear segment of history, with four parts:
819 819 # head, root, first parent, second parent
820 820 # (a branch always has two parents (or none) by definition)
821 821 unknown = remote.branches(unknown)
822 822 while unknown:
823 823 r = []
824 824 while unknown:
825 825 n = unknown.pop(0)
826 826 if n[0] in seen:
827 827 continue
828 828
829 829 self.ui.debug(_("examining %s:%s\n")
830 830 % (short(n[0]), short(n[1])))
831 831 if n[0] == nullid:
832 832 break
833 833 if n in seenbranch:
834 834 self.ui.debug(_("branch already found\n"))
835 835 continue
836 836 if n[1] and n[1] in m: # do we know the base?
837 837 self.ui.debug(_("found incomplete branch %s:%s\n")
838 838 % (short(n[0]), short(n[1])))
839 839 search.append(n) # schedule branch range for scanning
840 840 seenbranch[n] = 1
841 841 else:
842 842 if n[1] not in seen and n[1] not in fetch:
843 843 if n[2] in m and n[3] in m:
844 844 self.ui.debug(_("found new changeset %s\n") %
845 845 short(n[1]))
846 846 fetch[n[1]] = 1 # earliest unknown
847 847 base[n[2]] = 1 # latest known
848 848 continue
849 849
850 850 for a in n[2:4]:
851 851 if a not in rep:
852 852 r.append(a)
853 853 rep[a] = 1
854 854
855 855 seen[n[0]] = 1
856 856
857 857 if r:
858 858 reqcnt += 1
859 859 self.ui.debug(_("request %d: %s\n") %
860 860 (reqcnt, " ".join(map(short, r))))
861 861 for p in range(0, len(r), 10):
862 862 for b in remote.branches(r[p:p+10]):
863 863 self.ui.debug(_("received %s:%s\n") %
864 864 (short(b[0]), short(b[1])))
865 865 if b[0] in m:
866 866 self.ui.debug(_("found base node %s\n")
867 867 % short(b[0]))
868 868 base[b[0]] = 1
869 869 elif b[0] not in seen:
870 870 unknown.append(b)
871 871
872 872 # do binary search on the branches we found
873 873 while search:
874 874 n = search.pop(0)
875 875 reqcnt += 1
876 876 l = remote.between([(n[0], n[1])])[0]
877 877 l.append(n[1])
878 878 p = n[0]
879 879 f = 1
880 880 for i in l:
881 881 self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
882 882 if i in m:
883 883 if f <= 2:
884 884 self.ui.debug(_("found new branch changeset %s\n") %
885 885 short(p))
886 886 fetch[p] = 1
887 887 base[i] = 1
888 888 else:
889 889 self.ui.debug(_("narrowed branch search to %s:%s\n")
890 890 % (short(p), short(i)))
891 891 search.append((p, i))
892 892 break
893 893 p, f = i, f * 2
894 894
895 895 # sanity check our fetch list
896 896 for f in fetch.keys():
897 897 if f in m:
898 898 raise repo.RepoError(_("already have changeset ") + short(f))
899 899
900 900 if base.keys() == [nullid]:
901 901 self.ui.warn(_("warning: pulling from an unrelated repository!\n"))
902 902
903 903 self.ui.note(_("found new changesets starting at ") +
904 904 " ".join([short(f) for f in fetch]) + "\n")
905 905
906 906 self.ui.debug(_("%d total queries\n") % reqcnt)
907 907
908 908 return fetch.keys()
909 909
910 910 def findoutgoing(self, remote, base=None, heads=None):
911 911 if base is None:
912 912 base = {}
913 913 self.findincoming(remote, base, heads)
914 914
915 915 self.ui.debug(_("common changesets up to ")
916 916 + " ".join(map(short, base.keys())) + "\n")
917 917
918 918 remain = dict.fromkeys(self.changelog.nodemap)
919 919
920 920 # prune everything remote has from the tree
921 921 del remain[nullid]
922 922 remove = base.keys()
923 923 while remove:
924 924 n = remove.pop(0)
925 925 if n in remain:
926 926 del remain[n]
927 927 for p in self.changelog.parents(n):
928 928 remove.append(p)
929 929
930 930 # find every node whose parents have been pruned
931 931 subset = []
932 932 for n in remain:
933 933 p1, p2 = self.changelog.parents(n)
934 934 if p1 not in remain and p2 not in remain:
935 935 subset.append(n)
936 936
937 937 # this is the set of all roots we have to push
938 938 return subset
939 939
940 940 def pull(self, remote, heads=None):
941 941 l = self.lock()
942 942
943 943 # if we have an empty repo, fetch everything
944 944 if self.changelog.tip() == nullid:
945 945 self.ui.status(_("requesting all changes\n"))
946 946 fetch = [nullid]
947 947 else:
948 948 fetch = self.findincoming(remote)
949 949
950 950 if not fetch:
951 951 self.ui.status(_("no changes found\n"))
952 952 return 1
953 953
954 954 if heads is None:
955 955 cg = remote.changegroup(fetch, 'pull')
956 956 else:
957 957 cg = remote.changegroupsubset(fetch, heads, 'pull')
958 958 return self.addchangegroup(cg)
959 959
960 960 def push(self, remote, force=False, revs=None):
961 961 lock = remote.lock()
962 962
963 963 base = {}
964 964 heads = remote.heads()
965 965 inc = self.findincoming(remote, base, heads)
966 966 if not force and inc:
967 967 self.ui.warn(_("abort: unsynced remote changes!\n"))
968 968 self.ui.status(_("(did you forget to sync? use push -f to force)\n"))
969 969 return 1
970 970
971 971 update = self.findoutgoing(remote, base)
972 972 if revs is not None:
973 973 msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
974 974 else:
975 975 bases, heads = update, self.changelog.heads()
976 976
977 977 if not bases:
978 978 self.ui.status(_("no changes found\n"))
979 979 return 1
980 980 elif not force:
981 981 if len(bases) < len(heads):
982 982 self.ui.warn(_("abort: push creates new remote branches!\n"))
983 983 self.ui.status(_("(did you forget to merge?"
984 984 " use push -f to force)\n"))
985 985 return 1
986 986
987 987 if revs is None:
988 988 cg = self.changegroup(update, 'push')
989 989 else:
990 990 cg = self.changegroupsubset(update, revs, 'push')
991 991 return remote.addchangegroup(cg)
992 992
993 993 def changegroupsubset(self, bases, heads, source):
994 994 """This function generates a changegroup consisting of all the nodes
995 995 that are descendants of any of the bases, and ancestors of any of
996 996 the heads.
997 997
998 998 It is fairly complex as determining which filenodes and which
999 999 manifest nodes need to be included for the changeset to be complete
1000 1000 is non-trivial.
1001 1001
1002 1002 Another wrinkle is doing the reverse, figuring out which changeset in
1003 1003 the changegroup a particular filenode or manifestnode belongs to."""
1004 1004
1005 1005 self.hook('preoutgoing', throw=True, source=source)
1006 1006
1007 1007 # Set up some initial variables
1008 1008 # Make it easy to refer to self.changelog
1009 1009 cl = self.changelog
1010 1010 # msng is short for missing - compute the list of changesets in this
1011 1011 # changegroup.
1012 1012 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1013 1013 # Some bases may turn out to be superfluous, and some heads may be
1014 1014 # too. nodesbetween will return the minimal set of bases and heads
1015 1015 # necessary to re-create the changegroup.
1016 1016
1017 1017 # Known heads are the list of heads that it is assumed the recipient
1018 1018 # of this changegroup will know about.
1019 1019 knownheads = {}
1020 1020 # We assume that all parents of bases are known heads.
1021 1021 for n in bases:
1022 1022 for p in cl.parents(n):
1023 1023 if p != nullid:
1024 1024 knownheads[p] = 1
1025 1025 knownheads = knownheads.keys()
1026 1026 if knownheads:
1027 1027 # Now that we know what heads are known, we can compute which
1028 1028 # changesets are known. The recipient must know about all
1029 1029 # changesets required to reach the known heads from the null
1030 1030 # changeset.
1031 1031 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1032 1032 junk = None
1033 1033 # Transform the list into an ersatz set.
1034 1034 has_cl_set = dict.fromkeys(has_cl_set)
1035 1035 else:
1036 1036 # If there were no known heads, the recipient cannot be assumed to
1037 1037 # know about any changesets.
1038 1038 has_cl_set = {}
1039 1039
1040 1040 # Make it easy to refer to self.manifest
1041 1041 mnfst = self.manifest
1042 1042 # We don't know which manifests are missing yet
1043 1043 msng_mnfst_set = {}
1044 1044 # Nor do we know which filenodes are missing.
1045 1045 msng_filenode_set = {}
1046 1046
1047 1047 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1048 1048 junk = None
1049 1049
1050 1050 # A changeset always belongs to itself, so the changenode lookup
1051 1051 # function for a changenode is identity.
1052 1052 def identity(x):
1053 1053 return x
1054 1054
1055 1055 # A function generating function. Sets up an environment for the
1056 1056 # inner function.
1057 1057 def cmp_by_rev_func(revlog):
1058 1058 # Compare two nodes by their revision number in the environment's
1059 1059 # revision history. Since the revision number both represents the
1060 1060 # most efficient order to read the nodes in, and represents a
1061 1061 # topological sorting of the nodes, this function is often useful.
1062 1062 def cmp_by_rev(a, b):
1063 1063 return cmp(revlog.rev(a), revlog.rev(b))
1064 1064 return cmp_by_rev
1065 1065
1066 1066 # If we determine that a particular file or manifest node must be a
1067 1067 # node that the recipient of the changegroup will already have, we can
1068 1068 # also assume the recipient will have all the parents. This function
1069 1069 # prunes them from the set of missing nodes.
1070 1070 def prune_parents(revlog, hasset, msngset):
1071 1071 haslst = hasset.keys()
1072 1072 haslst.sort(cmp_by_rev_func(revlog))
1073 1073 for node in haslst:
1074 1074 parentlst = [p for p in revlog.parents(node) if p != nullid]
1075 1075 while parentlst:
1076 1076 n = parentlst.pop()
1077 1077 if n not in hasset:
1078 1078 hasset[n] = 1
1079 1079 p = [p for p in revlog.parents(n) if p != nullid]
1080 1080 parentlst.extend(p)
1081 1081 for n in hasset:
1082 1082 msngset.pop(n, None)
1083 1083
1084 1084 # This is a function generating function used to set up an environment
1085 1085 # for the inner function to execute in.
1086 1086 def manifest_and_file_collector(changedfileset):
1087 1087 # This is an information gathering function that gathers
1088 1088 # information from each changeset node that goes out as part of
1089 1089 # the changegroup. The information gathered is a list of which
1090 1090 # manifest nodes are potentially required (the recipient may
1091 1091 # already have them) and a total list of all files which were
1092 1092 # changed in any changeset in the changegroup.
1093 1093 #
1094 1094 # We also remember the first changenode we saw any manifest
1095 1095 # referenced by so we can later determine which changenode 'owns'
1096 1096 # the manifest.
1097 1097 def collect_manifests_and_files(clnode):
1098 1098 c = cl.read(clnode)
1099 1099 for f in c[3]:
1100 1100 # This is to make sure we only have one instance of each
1101 1101 # filename string for each filename.
1102 1102 changedfileset.setdefault(f, f)
1103 1103 msng_mnfst_set.setdefault(c[0], clnode)
1104 1104 return collect_manifests_and_files
1105 1105
1106 1106 # Figure out which manifest nodes (of the ones we think might be part
1107 1107 # of the changegroup) the recipient must know about and remove them
1108 1108 # from the changegroup.
1109 1109 def prune_manifests():
1110 1110 has_mnfst_set = {}
1111 1111 for n in msng_mnfst_set:
1112 1112 # If a 'missing' manifest thinks it belongs to a changenode
1113 1113 # the recipient is assumed to have, obviously the recipient
1114 1114 # must have that manifest.
1115 1115 linknode = cl.node(mnfst.linkrev(n))
1116 1116 if linknode in has_cl_set:
1117 1117 has_mnfst_set[n] = 1
1118 1118 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1119 1119
1120 1120 # Use the information collected in collect_manifests_and_files to say
1121 1121 # which changenode any manifestnode belongs to.
1122 1122 def lookup_manifest_link(mnfstnode):
1123 1123 return msng_mnfst_set[mnfstnode]
1124 1124
1125 1125 # A function generating function that sets up the initial environment
1126 1126 # the inner function.
1127 1127 def filenode_collector(changedfiles):
1128 1128 next_rev = [0]
1129 1129 # This gathers information from each manifestnode included in the
1130 1130 # changegroup about which filenodes the manifest node references
1131 1131 # so we can include those in the changegroup too.
1132 1132 #
1133 1133 # It also remembers which changenode each filenode belongs to. It
1134 1134 # does this by assuming that a filenode belongs to the changenode
1135 1135 # that the first manifest referencing it belongs to.
1136 1136 def collect_msng_filenodes(mnfstnode):
1137 1137 r = mnfst.rev(mnfstnode)
1138 1138 if r == next_rev[0]:
1139 1139 # If the last rev we looked at was the previous one,
1140 1140 # we only need to see a diff.
1141 1141 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1142 1142 # For each line in the delta
1143 1143 for dline in delta.splitlines():
1144 1144 # get the filename and filenode for that line
1145 1145 f, fnode = dline.split('\0')
1146 1146 fnode = bin(fnode[:40])
1147 1147 f = changedfiles.get(f, None)
1148 1148 # And if the file is in the list of files we care
1149 1149 # about.
1150 1150 if f is not None:
1151 1151 # Get the changenode this manifest belongs to
1152 1152 clnode = msng_mnfst_set[mnfstnode]
1153 1153 # Create the set of filenodes for the file if
1154 1154 # there isn't one already.
1155 1155 ndset = msng_filenode_set.setdefault(f, {})
1156 1156 # And set the filenode's changelog node to the
1157 1157 # manifest's if it hasn't been set already.
1158 1158 ndset.setdefault(fnode, clnode)
1159 1159 else:
1160 1160 # Otherwise we need a full manifest.
1161 1161 m = mnfst.read(mnfstnode)
1162 1162 # For every file we care about.
1163 1163 for f in changedfiles:
1164 1164 fnode = m.get(f, None)
1165 1165 # If it's in the manifest
1166 1166 if fnode is not None:
1167 1167 # See comments above.
1168 1168 clnode = msng_mnfst_set[mnfstnode]
1169 1169 ndset = msng_filenode_set.setdefault(f, {})
1170 1170 ndset.setdefault(fnode, clnode)
1171 1171 # Remember the revision we hope to see next.
1172 1172 next_rev[0] = r + 1
1173 1173 return collect_msng_filenodes
1174 1174
1175 1175 # We have a list of filenodes we think we need for a file; let's remove
1176 1176 # all those we know the recipient must have.
1177 1177 def prune_filenodes(f, filerevlog):
1178 1178 msngset = msng_filenode_set[f]
1179 1179 hasset = {}
1180 1180 # If a 'missing' filenode thinks it belongs to a changenode we
1181 1181 # assume the recipient must have, then the recipient must have
1182 1182 # that filenode.
1183 1183 for n in msngset:
1184 1184 clnode = cl.node(filerevlog.linkrev(n))
1185 1185 if clnode in has_cl_set:
1186 1186 hasset[n] = 1
1187 1187 prune_parents(filerevlog, hasset, msngset)
1188 1188
1189 1189 # A function generating function that sets up a context for the
1190 1190 # inner function.
1191 1191 def lookup_filenode_link_func(fname):
1192 1192 msngset = msng_filenode_set[fname]
1193 1193 # Lookup the changenode the filenode belongs to.
1194 1194 def lookup_filenode_link(fnode):
1195 1195 return msngset[fnode]
1196 1196 return lookup_filenode_link
1197 1197
1198 1198 # Now that we have all these utility functions to help out and
1199 1199 # logically divide up the task, generate the group.
1200 1200 def gengroup():
1201 1201 # The set of changed files starts empty.
1202 1202 changedfiles = {}
1203 1203 # Create a changenode group generator that will call our functions
1204 1204 # back to lookup the owning changenode and collect information.
1205 1205 group = cl.group(msng_cl_lst, identity,
1206 1206 manifest_and_file_collector(changedfiles))
1207 1207 for chnk in group:
1208 1208 yield chnk
1209 1209
1210 1210 # The list of manifests has been collected by the generator
1211 1211 # calling our functions back.
1212 1212 prune_manifests()
1213 1213 msng_mnfst_lst = msng_mnfst_set.keys()
1214 1214 # Sort the manifestnodes by revision number.
1215 1215 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1216 1216 # Create a generator for the manifestnodes that calls our lookup
1217 1217 # and data collection functions back.
1218 1218 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1219 1219 filenode_collector(changedfiles))
1220 1220 for chnk in group:
1221 1221 yield chnk
1222 1222
1223 1223 # These are no longer needed, dereference and toss the memory for
1224 1224 # them.
1225 1225 msng_mnfst_lst = None
1226 1226 msng_mnfst_set.clear()
1227 1227
1228 1228 changedfiles = changedfiles.keys()
1229 1229 changedfiles.sort()
1230 1230 # Go through all our files in order sorted by name.
1231 1231 for fname in changedfiles:
1232 1232 filerevlog = self.file(fname)
1233 1233 # Toss out the filenodes that the recipient isn't really
1234 1234 # missing.
1235 1235 if msng_filenode_set.has_key(fname):
1236 1236 prune_filenodes(fname, filerevlog)
1237 1237 msng_filenode_lst = msng_filenode_set[fname].keys()
1238 1238 else:
1239 1239 msng_filenode_lst = []
1240 1240 # If any filenodes are left, generate the group for them,
1241 1241 # otherwise don't bother.
1242 1242 if msng_filenode_lst:
1243 1243 yield struct.pack(">l", len(fname) + 4) + fname
1244 1244 # Sort the filenodes by their revision #
1245 1245 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1246 1246 # Create a group generator and only pass in a changenode
1247 1247 # lookup function as we need to collect no information
1248 1248 # from filenodes.
1249 1249 group = filerevlog.group(msng_filenode_lst,
1250 1250 lookup_filenode_link_func(fname))
1251 1251 for chnk in group:
1252 1252 yield chnk
1253 1253 if msng_filenode_set.has_key(fname):
1254 1254 # Don't need this anymore, toss it to free memory.
1255 1255 del msng_filenode_set[fname]
1256 1256 # Signal that no more groups are left.
1257 1257 yield struct.pack(">l", 0)
1258 1258
1259 1259 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1260 1260
1261 1261 return util.chunkbuffer(gengroup())
1262 1262
1263 1263 def changegroup(self, basenodes, source):
1264 1264 """Generate a changegroup of all nodes that we have that a recipient
1265 1265 doesn't.
1266 1266
1267 1267 This is much easier than the previous function as we can assume that
1268 1268 the recipient has any changenode we aren't sending them."""
1269 1269
1270 1270 self.hook('preoutgoing', throw=True, source=source)
1271 1271
1272 1272 cl = self.changelog
1273 1273 nodes = cl.nodesbetween(basenodes, None)[0]
1274 1274 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1275 1275
1276 1276 def identity(x):
1277 1277 return x
1278 1278
1279 1279 def gennodelst(revlog):
1280 1280 for r in xrange(0, revlog.count()):
1281 1281 n = revlog.node(r)
1282 1282 if revlog.linkrev(n) in revset:
1283 1283 yield n
1284 1284
1285 1285 def changed_file_collector(changedfileset):
1286 1286 def collect_changed_files(clnode):
1287 1287 c = cl.read(clnode)
1288 1288 for fname in c[3]:
1289 1289 changedfileset[fname] = 1
1290 1290 return collect_changed_files
1291 1291
1292 1292 def lookuprevlink_func(revlog):
1293 1293 def lookuprevlink(n):
1294 1294 return cl.node(revlog.linkrev(n))
1295 1295 return lookuprevlink
1296 1296
1297 1297 def gengroup():
1298 1298 # construct a list of all changed files
1299 1299 changedfiles = {}
1300 1300
1301 1301 for chnk in cl.group(nodes, identity,
1302 1302 changed_file_collector(changedfiles)):
1303 1303 yield chnk
1304 1304 changedfiles = changedfiles.keys()
1305 1305 changedfiles.sort()
1306 1306
1307 1307 mnfst = self.manifest
1308 1308 nodeiter = gennodelst(mnfst)
1309 1309 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1310 1310 yield chnk
1311 1311
1312 1312 for fname in changedfiles:
1313 1313 filerevlog = self.file(fname)
1314 1314 nodeiter = gennodelst(filerevlog)
1315 1315 nodeiter = list(nodeiter)
1316 1316 if nodeiter:
1317 1317 yield struct.pack(">l", len(fname) + 4) + fname
1318 1318 lookup = lookuprevlink_func(filerevlog)
1319 1319 for chnk in filerevlog.group(nodeiter, lookup):
1320 1320 yield chnk
1321 1321
1322 1322 yield struct.pack(">l", 0)
1323 1323 self.hook('outgoing', node=hex(nodes[0]), source=source)
1324 1324
1325 1325 return util.chunkbuffer(gengroup())
1326 1326
1327 1327 def addchangegroup(self, source):
1328 1328
1329 1329 def getchunk():
1330 1330 d = source.read(4)
1331 1331 if not d:
1332 1332 return ""
1333 1333 l = struct.unpack(">l", d)[0]
1334 1334 if l <= 4:
1335 1335 return ""
1336 1336 d = source.read(l - 4)
1337 1337 if len(d) < l - 4:
1338 1338 raise repo.RepoError(_("premature EOF reading chunk"
1339 1339 " (got %d bytes, expected %d)")
1340 1340 % (len(d), l - 4))
1341 1341 return d
1342 1342
1343 1343 def getgroup():
1344 1344 while 1:
1345 1345 c = getchunk()
1346 1346 if not c:
1347 1347 break
1348 1348 yield c
1349 1349
1350 1350 def csmap(x):
1351 1351 self.ui.debug(_("add changeset %s\n") % short(x))
1352 1352 return self.changelog.count()
1353 1353
1354 1354 def revmap(x):
1355 1355 return self.changelog.rev(x)
1356 1356
1357 1357 if not source:
1358 1358 return
1359 1359
1360 1360 self.hook('prechangegroup', throw=True)
1361 1361
1362 1362 changesets = files = revisions = 0
1363 1363
1364 1364 tr = self.transaction()
1365 1365
1366 1366 oldheads = len(self.changelog.heads())
1367 1367
1368 1368 # pull off the changeset group
1369 1369 self.ui.status(_("adding changesets\n"))
1370 1370 co = self.changelog.tip()
1371 1371 cn = self.changelog.addgroup(getgroup(), csmap, tr, 1) # unique
1372 1372 cnr, cor = map(self.changelog.rev, (cn, co))
1373 1373 if cn == nullid:
1374 1374 cnr = cor
1375 1375 changesets = cnr - cor
1376 1376
1377 1377 # pull off the manifest group
1378 1378 self.ui.status(_("adding manifests\n"))
1379 1379 mm = self.manifest.tip()
1380 1380 mo = self.manifest.addgroup(getgroup(), revmap, tr)
1381 1381
1382 1382 # process the files
1383 1383 self.ui.status(_("adding file changes\n"))
1384 1384 while 1:
1385 1385 f = getchunk()
1386 1386 if not f:
1387 1387 break
1388 1388 self.ui.debug(_("adding %s revisions\n") % f)
1389 1389 fl = self.file(f)
1390 1390 o = fl.count()
1391 1391 n = fl.addgroup(getgroup(), revmap, tr)
1392 1392 revisions += fl.count() - o
1393 1393 files += 1
1394 1394
1395 1395 newheads = len(self.changelog.heads())
1396 1396 heads = ""
1397 1397 if oldheads and newheads > oldheads:
1398 1398 heads = _(" (+%d heads)") % (newheads - oldheads)
1399 1399
1400 1400 self.ui.status(_("added %d changesets"
1401 1401 " with %d changes to %d files%s\n")
1402 1402 % (changesets, revisions, files, heads))
1403 1403
1404 1404 self.hook('pretxnchangegroup', throw=True,
1405 1405 node=hex(self.changelog.node(cor+1)))
1406 1406
1407 1407 tr.close()
1408 1408
1409 1409 if changesets > 0:
1410 1410 self.hook("changegroup", node=hex(self.changelog.node(cor+1)))
1411 1411
1412 1412 for i in range(cor + 1, cnr + 1):
1413 1413 self.hook("incoming", node=hex(self.changelog.node(i)))
1414 1414
1415 1415 def update(self, node, allow=False, force=False, choose=None,
1416 1416 moddirstate=True, forcemerge=False, wlock=None):
1417 1417 pl = self.dirstate.parents()
1418 1418 if not force and pl[1] != nullid:
1419 1419 self.ui.warn(_("aborting: outstanding uncommitted merges\n"))
1420 1420 return 1
1421 1421
1422 1422 err = False
1423 1423
1424 1424 p1, p2 = pl[0], node
1425 1425 pa = self.changelog.ancestor(p1, p2)
1426 1426 m1n = self.changelog.read(p1)[0]
1427 1427 m2n = self.changelog.read(p2)[0]
1428 1428 man = self.manifest.ancestor(m1n, m2n)
1429 1429 m1 = self.manifest.read(m1n)
1430 1430 mf1 = self.manifest.readflags(m1n)
1431 1431 m2 = self.manifest.read(m2n).copy()
1432 1432 mf2 = self.manifest.readflags(m2n)
1433 1433 ma = self.manifest.read(man)
1434 1434 mfa = self.manifest.readflags(man)
1435 1435
1436 1436 modified, added, removed, deleted, unknown = self.changes()
1437 1437
1438 1438 # is this a jump, or a merge? i.e. is there a linear path
1439 1439 # from p1 to p2?
1440 1440 linear_path = (pa == p1 or pa == p2)
1441 1441
1442 1442 if allow and linear_path:
1443 1443 raise util.Abort(_("there is nothing to merge, "
1444 1444 "just use 'hg update'"))
1445 1445 if allow and not forcemerge:
1446 1446 if modified or added or removed:
1447 1447 raise util.Abort(_("outstanding uncommitted changes"))
        if not forcemerge and not force:
            for f in unknown:
                if f in m2:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) != 0:
                        raise util.Abort(_("'%s' already exists in the working"
                                           " dir and differs from remote") % f)

        # resolve the manifest to determine which files
        # we care about merging
        self.ui.note(_("resolving manifests\n"))
        self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
                      (force, allow, moddirstate, linear_path))
        self.ui.debug(_(" ancestor %s local %s remote %s\n") %
                      (short(man), short(m1n), short(m2n)))

        merge = {}
        get = {}
        remove = []

        # construct a working dir manifest
        mw = m1.copy()
        mfw = mf1.copy()
        umap = dict.fromkeys(unknown)

        for f in added + modified + unknown:
            mw[f] = ""
            mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))

        if moddirstate and not wlock:
            wlock = self.wlock()

        for f in deleted + removed:
            if f in mw:
                del mw[f]

            # If we're jumping between revisions (as opposed to merging),
            # and if neither the working directory nor the target rev has
            # the file, then we need to remove it from the dirstate, to
            # prevent the dirstate from listing the file when it is no
            # longer in the manifest.
            if moddirstate and linear_path and f not in m2:
                self.dirstate.forget((f,))
        # Compare manifests
        for f, n in mw.iteritems():
            if choose and not choose(f):
                continue
            if f in m2:
                s = 0

                # is the wfile new since m1, and match m2?
                if f not in m1:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) == 0:
                        n = m2[f]
                    del t1, t2

                # are files different?
                if n != m2[f]:
                    a = ma.get(f, nullid)
                    # are both different from the ancestor?
                    if n != a and m2[f] != a:
                        self.ui.debug(_(" %s versions differ, resolve\n") % f)
                        # merge executable bits
                        # "if we changed or they changed, change in merge"
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
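                        # A quick truth table for the xor rule above
                        # (illustrative): whichever side changed the exec
                        # bit relative to the ancestor a wins:
                        #   a=0, b=1, c=0 -> ((0^1)|(0^0))^0 = 1  (ours)
                        #   a=0, b=0, c=1 -> ((0^0)|(0^1))^0 = 1  (theirs)
                        #   a=1, b=1, c=0 -> ((1^1)|(1^0))^1 = 0  (theirs)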
                        merge[f] = (m1.get(f, nullid), m2[f], mode)
                        s = 1
                    # are we clobbering?
                    # is remote's version newer?
                    # or are we going back in time?
                    elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
                        self.ui.debug(_(" remote %s is newer, get\n") % f)
                        get[f] = m2[f]
                        s = 1
                elif f in umap:
                    # this unknown file is the same as the checkout
                    get[f] = m2[f]

                if not s and mfw[f] != mf2[f]:
                    if force:
                        self.ui.debug(_(" updating permissions for %s\n") % f)
                        util.set_exec(self.wjoin(f), mf2[f])
                    else:
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        if mode != b:
                            self.ui.debug(_(" updating permissions for %s\n")
                                          % f)
                            util.set_exec(self.wjoin(f), mode)
                del m2[f]
            elif f in ma:
                if n != ma[f]:
                    r = _("d")
                    if not force and (linear_path or allow):
                        r = self.ui.prompt(
                            (_(" local changed %s which remote deleted\n") % f) +
                            _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                    if r == _("d"):
                        remove.append(f)
                else:
                    self.ui.debug(_("other deleted %s\n") % f)
                    remove.append(f) # other deleted it
            else:
                # file is created on branch or in working directory
                if force and f not in umap:
                    self.ui.debug(_("remote deleted %s, clobbering\n") % f)
                    remove.append(f)
                elif n == m1.get(f, nullid): # same as parent
                    if p2 == pa: # going backwards?
                        self.ui.debug(_("remote deleted %s\n") % f)
                        remove.append(f)
                    else:
                        self.ui.debug(_("local modified %s, keeping\n") % f)
                else:
                    self.ui.debug(_("working dir created %s, keeping\n") % f)

        for f, n in m2.iteritems():
            if choose and not choose(f):
                continue
            if f[0] == "/":
                continue
            if f in ma and n != ma[f]:
                r = _("k")
                if not force and (linear_path or allow):
                    r = self.ui.prompt(
                        (_("remote changed %s which local deleted\n") % f) +
                        _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                if r == _("k"):
                    get[f] = n
            elif f not in ma:
                self.ui.debug(_("remote created %s\n") % f)
                get[f] = n
            else:
                if force or p2 == pa: # going backwards?
                    self.ui.debug(_("local deleted %s, recreating\n") % f)
                    get[f] = n
                else:
                    self.ui.debug(_("local deleted %s\n") % f)

        del mw, m1, m2, ma

        if force:
            for f in merge:
                get[f] = merge[f][1]
            merge = {}

        if linear_path or force:
            # we don't need to do any magic, just jump to the new rev
            branch_merge = False
            p1, p2 = p2, nullid
        else:
            if not allow:
                self.ui.status(_("this update spans a branch"
                                 " affecting the following files:\n"))
                fl = merge.keys() + get.keys()
                fl.sort()
                for f in fl:
                    cf = ""
                    if f in merge:
                        cf = _(" (resolve)")
                    self.ui.status(" %s%s\n" % (f, cf))
                self.ui.warn(_("aborting update spanning branches!\n"))
                self.ui.status(_("(use update -m to merge across branches"
                                 " or -C to lose changes)\n"))
                return 1
            branch_merge = True

        # get the files we don't need to change
        files = get.keys()
        files.sort()
        for f in files:
            if f[0] == "/":
                continue
            self.ui.note(_("getting %s\n") % f)
            t = self.file(f).read(get[f])
            self.wwrite(f, t)
            util.set_exec(self.wjoin(f), mf2[f])
            if moddirstate:
                if branch_merge:
                    self.dirstate.update([f], 'n', st_mtime=-1)
                else:
                    self.dirstate.update([f], 'n')

        # merge the tricky bits
        files = merge.keys()
        files.sort()
        for f in files:
            self.ui.status(_("merging %s\n") % f)
            my, other, flag = merge[f]
            ret = self.merge3(f, my, other)
            if ret:
                err = True
            util.set_exec(self.wjoin(f), flag)
            if moddirstate:
                if branch_merge:
                    # We've done a branch merge, mark this file as merged
                    # so that we properly record the merger later
                    self.dirstate.update([f], 'm')
                else:
                    # We've update-merged a locally modified file, so
                    # we set the dirstate to emulate a normal checkout
                    # of that file some time in the past. Thus our
                    # merge will appear as a normal local file
                    # modification.
                    f_len = len(self.file(f).read(other))
                    self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)

        remove.sort()
        for f in remove:
            self.ui.note(_("removing %s\n") % f)
            try:
                util.unlink(self.wjoin(f))
            except OSError, inst:
                if inst.errno != errno.ENOENT:
                    self.ui.warn(_("update failed to remove %s: %s!\n") %
                                 (f, inst.strerror))
        if moddirstate:
            if branch_merge:
                self.dirstate.update(remove, 'r')
            else:
                self.dirstate.forget(remove)

        if moddirstate:
            self.dirstate.setparents(p1, p2)
        return err

    def merge3(self, fn, my, other):
        """perform a 3-way merge in the working directory"""

        def temp(prefix, node):
            pre = "%s~%s." % (os.path.basename(fn), prefix)
            (fd, name) = tempfile.mkstemp("", pre)
            f = os.fdopen(fd, "wb")
            self.wwrite(fn, fl.read(node), f)
            f.close()
            return name

        fl = self.file(fn)
        base = fl.ancestor(my, other)
        a = self.wjoin(fn)
        b = temp("base", base)
        c = temp("other", other)

        self.ui.note(_("resolving %s\n") % fn)
        self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
                      (fn, short(my), short(other), short(base)))

        cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
               or "hgmerge")
        r = os.system('%s "%s" "%s" "%s"' % (cmd, a, b, c))
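        # A note on the call above (illustrative, inferred from this code):
        # the merge program is run as  <cmd> <local> <base> <other>  and is
        # expected to leave the merged result in <local>; any nonzero exit
        # status is treated as a failed merge below.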
        if r:
            self.ui.warn(_("merging %s failed!\n") % fn)

        os.unlink(b)
        os.unlink(c)
        return r

    def verify(self):
        filelinkrevs = {}
        filenodes = {}
        changesets = revisions = files = 0
        errors = [0]
        neededmanifests = {}

        def err(msg):
            self.ui.warn(msg + "\n")
            errors[0] += 1

        def checksize(obj, name):
            d = obj.checksize()
            if d[0]:
                err(_("%s data length off by %d bytes") % (name, d[0]))
            if d[1]:
                err(_("%s index contains %d extra bytes") % (name, d[1]))

        seen = {}
        self.ui.status(_("checking changesets\n"))
        checksize(self.changelog, "changelog")

        for i in range(self.changelog.count()):
            changesets += 1
            n = self.changelog.node(i)
            l = self.changelog.linkrev(n)
            if l != i:
                err(_("incorrect link (%d) for changeset revision %d") % (l, i))
            if n in seen:
                err(_("duplicate changeset at revision %d") % i)
            seen[n] = 1

            for p in self.changelog.parents(n):
                if p not in self.changelog.nodemap:
                    err(_("changeset %s has unknown parent %s") %
                        (short(n), short(p)))
            try:
                changes = self.changelog.read(n)
            except KeyboardInterrupt:
                self.ui.warn(_("interrupted"))
                raise
            except Exception, inst:
                err(_("unpacking changeset %s: %s") % (short(n), inst))

            neededmanifests[changes[0]] = n

            for f in changes[3]:
                filelinkrevs.setdefault(f, []).append(i)

        seen = {}
        self.ui.status(_("checking manifests\n"))
        checksize(self.manifest, "manifest")

        for i in range(self.manifest.count()):
            n = self.manifest.node(i)
            l = self.manifest.linkrev(n)

            if l < 0 or l >= self.changelog.count():
                err(_("bad manifest link (%d) at revision %d") % (l, i))

            if n in neededmanifests:
                del neededmanifests[n]

            if n in seen:
                err(_("duplicate manifest at revision %d") % i)

            seen[n] = 1

            for p in self.manifest.parents(n):
                if p not in self.manifest.nodemap:
                    err(_("manifest %s has unknown parent %s") %
                        (short(n), short(p)))

            try:
                delta = mdiff.patchtext(self.manifest.delta(n))
            except KeyboardInterrupt:
                self.ui.warn(_("interrupted"))
                raise
            except Exception, inst:
                err(_("unpacking manifest %s: %s") % (short(n), inst))

            ff = [ l.split('\0') for l in delta.splitlines() ]
            for f, fn in ff:
                filenodes.setdefault(f, {})[bin(fn[:40])] = 1

        self.ui.status(_("crosschecking files in changesets and manifests\n"))

        for m, c in neededmanifests.items():
            err(_("Changeset %s refers to unknown manifest %s") %
                (short(m), short(c)))
        del neededmanifests

        for f in filenodes:
            if f not in filelinkrevs:
                err(_("file %s in manifest but not in changesets") % f)

        for f in filelinkrevs:
            if f not in filenodes:
                err(_("file %s in changeset but not in manifest") % f)

        self.ui.status(_("checking files\n"))
        ff = filenodes.keys()
        ff.sort()
        for f in ff:
            if f == "/dev/null":
                continue
            files += 1
            fl = self.file(f)
            checksize(fl, f)

            nodes = {nullid: 1}
            seen = {}
            for i in range(fl.count()):
                revisions += 1
                n = fl.node(i)

                if n in seen:
                    err(_("%s: duplicate revision %d") % (f, i))
                if n not in filenodes[f]:
                    err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
                else:
                    del filenodes[f][n]

                flr = fl.linkrev(n)
                if flr not in filelinkrevs[f]:
                    err(_("%s:%s points to unexpected changeset %d")
                        % (f, short(n), flr))
                else:
                    filelinkrevs[f].remove(flr)

                # verify contents
                try:
                    t = fl.read(n)
                except KeyboardInterrupt:
                    self.ui.warn(_("interrupted"))
                    raise
                except Exception, inst:
                    err(_("unpacking file %s %s: %s") % (f, short(n), inst))

                # verify parents
                (p1, p2) = fl.parents(n)
                if p1 not in nodes:
                    err(_("file %s:%s unknown parent 1 %s") %
                        (f, short(n), short(p1)))
                if p2 not in nodes:
                    err(_("file %s:%s unknown parent 2 %s") %
                        (f, short(n), short(p2)))
                nodes[n] = 1

            # cross-check
            for node in filenodes[f]:
                err(_("node %s in manifests not in %s") % (hex(node), f))

        self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
                       (files, changesets, revisions))

        if errors[0]:
            self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
            return 1