add a timeout when a lock is held (default 1024 sec)...
Benoit Boissinot
r1787:e431344e default
@@ -1,310 +1,313 b''
1 1 HGRC(5)
2 2 =======
3 3 Bryan O'Sullivan <bos@serpentine.com>
4 4
5 5 NAME
6 6 ----
7 7 hgrc - configuration files for Mercurial
8 8
9 9 SYNOPSIS
10 10 --------
11 11
12 12 The Mercurial system uses a set of configuration files to control
13 13 aspects of its behaviour.
14 14
15 15 FILES
16 16 -----
17 17
18 18 Mercurial reads configuration data from several files, if they exist.
19 19 The names of these files depend on the system on which Mercurial is
20 20 installed.
21 21
22 22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
23 23 (Unix) <install-root>/etc/mercurial/hgrc::
24 24 Per-installation configuration files, searched for in the
25 25 directory where Mercurial is installed. For example, if installed
26 26 in /shared/tools, Mercurial will look in
27 27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
28 28 all Mercurial commands executed by any user in any directory.
29 29
30 30 (Unix) /etc/mercurial/hgrc.d/*.rc::
31 31 (Unix) /etc/mercurial/hgrc::
32 32 (Windows) C:\Mercurial\Mercurial.ini::
33 33 Per-system configuration files, for the system on which Mercurial
34 34 is running. Options in these files apply to all Mercurial
35 35 commands executed by any user in any directory. Options in these
36 36 files override per-installation options.
37 37
38 38 (Unix) $HOME/.hgrc::
39 39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
40 40 Per-user configuration file, for the user running Mercurial.
41 41 Options in this file apply to all Mercurial commands executed by
42 42 this user in any directory. Options in this file override
43 43 per-installation and per-system options.
44 44
45 45 (Unix, Windows) <repo>/.hg/hgrc::
46 46 Per-repository configuration options that only apply in a
47 47 particular repository. This file is not version-controlled, and
48 48 will not get transferred during a "clone" operation. Options in
49 49 this file override options in all other configuration files.
50 50
51 51 SYNTAX
52 52 ------
53 53
54 54 A configuration file consists of sections, led by a "[section]" header
55 55 and followed by "name: value" entries; "name=value" is also accepted.
56 56
57 57 [spam]
58 58 eggs=ham
59 59 green=
60 60 eggs
61 61
62 62 Each line contains one entry. If the lines that follow are indented,
63 63 they are treated as continuations of that entry.
64 64
65 65 Leading whitespace is removed from values. Empty lines are skipped.
66 66
67 67 Values may contain format strings that refer to other values in
68 68 the same section, or to values in a special DEFAULT section.
69 69
70 70 Lines beginning with "#" or ";" are ignored and may be used to provide
71 71 comments.
72 72
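Mercurial's config reader follows the conventions of Python's ConfigParser format, so the syntax and interpolation rules described above can be sketched with the standard library (the section and key names here are illustrative, not taken from a real hgrc):

```python
from configparser import ConfigParser

# hgrc-style syntax: both "name: value" and "name=value" parse, and
# %(key)s interpolation pulls values from the same section or from
# the special DEFAULT section.
text = """
[DEFAULT]
base = /repos

[paths]
main: %(base)s/main
work = %(base)s/work
"""

cp = ConfigParser()
cp.read_string(text)
print(cp.get("paths", "main"))  # -> /repos/main
```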
73 73 SECTIONS
74 74 --------
75 75
76 76 This section describes the different sections that may appear in a
77 77 Mercurial "hgrc" file, the purpose of each section, its possible
78 78 keys, and their possible values.
79 79
80 80 decode/encode::
81 81 Filters for transforming files on checkout/checkin. This would
82 82 typically be used for newline processing or other
83 83 localization/canonicalization of files.
84 84
85 85 Filters consist of a filter pattern followed by a filter command.
86 86 Filter patterns are globs by default, rooted at the repository
87 87 root. For example, to match any file ending in ".txt" in the root
88 88 directory only, use the pattern "*.txt". To match any file ending
89 89 in ".c" anywhere in the repository, use the pattern "**.c".
90 90
91 91 The filter command can start with a specifier, either "pipe:" or
92 92 "tempfile:". If no specifier is given, "pipe:" is used by default.
93 93
94 94 A "pipe:" command must accept data on stdin and return the
95 95 transformed data on stdout.
96 96
97 97 Pipe example:
98 98
99 99 [encode]
100 100 # uncompress gzip files on checkin to improve delta compression
101 101 # note: not necessarily a good idea, just an example
102 102 *.gz = pipe: gunzip
103 103
104 104 [decode]
105 105 # recompress gzip files when writing them to the working dir (we
106 106 # can safely omit "pipe:", because it's the default)
107 107 *.gz = gzip
108 108
109 109 A "tempfile:" command is a template. The string INFILE is replaced
110 110 with the name of a temporary file that contains the data to be
111 111 filtered by the command. The string OUTFILE is replaced with the
112 112 name of an empty temporary file, where the filtered data must be
113 113 written by the command.
114 114
115 115 NOTE: the tempfile mechanism is recommended for Windows systems,
116 116 where the standard shell I/O redirection operators often have
117 117 strange effects. In particular, if you are doing line ending
118 118 conversion on Windows using the popular dos2unix and unix2dos
119 119 programs, you *must* use the tempfile mechanism, as using pipes will
120 120 corrupt the contents of your files.
121 121
122 122 Tempfile example:
123 123
124 124 [encode]
125 125 # convert files to unix line ending conventions on checkin
126 126 **.txt = tempfile: dos2unix -n INFILE OUTFILE
127 127
128 128 [decode]
129 129 # convert files to windows line ending conventions when writing
130 130 # them to the working dir
131 131 **.txt = tempfile: unix2dos -n INFILE OUTFILE
132 132
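A "pipe:" filter command simply transforms stdin to stdout; the mechanism can be sketched in a few lines of Python, with `tr` standing in for a real filter command (this assumes a POSIX shell and is not Mercurial's actual implementation):

```python
import subprocess

def pipe_filter(data, cmd):
    # run the filter command in a shell, feed the file contents to its
    # stdin, and take whatever it writes to stdout as the new contents
    p = subprocess.Popen(cmd, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(data)
    return out

print(pipe_filter(b"mixed case\n", "tr a-z A-Z"))  # -> b'MIXED CASE\n'
```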
133 133 hooks::
134 134 Commands that get automatically executed by various actions such as
135 135 starting or finishing a commit. Multiple commands can be run for
136 136 the same action by appending a suffix to the action. Overriding a
137 137 site-wide hook can be done by changing its value or setting it to
138 138 an empty string.
139 139
140 140 Example .hg/hgrc:
141 141
142 142 [hooks]
143 143 # do not use the site-wide hook
144 144 incoming =
145 145 incoming.email = /my/email/hook
146 146 incoming.autobuild = /my/build/hook
147 147
148 148 Most hooks are run with environment variables set that give added
149 149 useful information. For each hook below, the environment variables
150 150 it is passed are listed with names of the form "$HG_foo".
151 151
152 152 changegroup;;
153 153 Run after a changegroup has been added via push, pull or
154 154 unbundle. ID of the first new changeset is in $HG_NODE.
155 155 commit;;
156 156 Run after a changeset has been created in the local repository.
157 157 ID of the newly created changeset is in $HG_NODE. Parent
158 158 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
159 159 incoming;;
160 160 Run after a changeset has been pulled, pushed, or unbundled into
161 161 the local repository. The ID of the newly arrived changeset is in
162 162 $HG_NODE.
163 163 outgoing;;
164 164 Run after sending changes from local repository to another. ID of
165 165 first changeset sent is in $HG_NODE. Source of operation is in
166 166 $HG_SOURCE; see "preoutgoing" hook for description.
167 167 prechangegroup;;
168 168 Run before a changegroup is added via push, pull or unbundle.
169 169 Exit status 0 allows the changegroup to proceed. Non-zero status
170 170 will cause the push, pull or unbundle to fail.
171 171 precommit;;
172 172 Run before starting a local commit. Exit status 0 allows the
173 173 commit to proceed. Non-zero status will cause the commit to fail.
174 174 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
175 175 preoutgoing;;
176 176 Run before computing changes to send from the local repository to
177 177 another. Non-zero status will cause failure. This lets you
178 178 prevent pull over http or ssh. It also runs for local pull,
179 179 push (outbound) and bundle commands, but is not effective there,
180 180 since the files can simply be copied instead. Source of operation is in
181 181 $HG_SOURCE. If "serve", operation is happening on behalf of
182 182 remote ssh or http repository. If "push", "pull" or "bundle",
183 183 operation is happening on behalf of repository on same system.
184 184 pretag;;
185 185 Run before creating a tag. Exit status 0 allows the tag to be
186 186 created. Non-zero status will cause the tag to fail. ID of
187 187 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
188 188 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
189 189 pretxnchangegroup;;
190 190 Run after a changegroup has been added via push, pull or unbundle,
191 191 but before the transaction has been committed. Changegroup is
192 192 visible to hook program. This lets you validate incoming changes
193 193 before accepting them. Passed the ID of the first new changeset
194 194 in $HG_NODE. Exit status 0 allows the transaction to commit.
195 195 Non-zero status will cause the transaction to be rolled back and
196 196 the push, pull or unbundle will fail.
197 197 pretxncommit;;
198 198 Run after a changeset has been created but the transaction not yet
199 199 committed. Changeset is visible to hook program. This lets you
200 200 validate commit message and changes. Exit status 0 allows the
201 201 commit to proceed. Non-zero status will cause the transaction to
202 202 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
203 203 IDs are in $HG_PARENT1 and $HG_PARENT2.
204 204 tag;;
205 205 Run after a tag is created. ID of tagged changeset is in
206 206 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
207 207 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
208 208
209 209 In earlier releases, the names of hook environment variables did not
210 210 have a "HG_" prefix. These unprefixed names are still provided in
211 211 the environment for backwards compatibility, but their use is
212 212 deprecated, and they will be removed in a future release.
213 213
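Because hooks receive their context through HG_* environment variables, any executable program can serve as a hook. A minimal sketch of the lookup an "incoming" hook would perform (the dictionary here stands in for the real process environment):

```python
def incoming_hook(environ):
    # Mercurial exports hook context as HG_* environment variables;
    # an "incoming" hook finds the new changeset ID in HG_NODE
    node = environ.get("HG_NODE", "unknown")
    return "new changeset: %s" % node

print(incoming_hook({"HG_NODE": "e431344e"}))  # -> new changeset: e431344e
```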
214 214 http_proxy::
215 215 Used to access web-based Mercurial repositories through an HTTP
216 216 proxy.
217 217 host;;
218 218 Host name and (optional) port of the proxy server, for example
219 219 "myproxy:8000".
220 220 no;;
221 221 Optional. Comma-separated list of host names that should bypass
222 222 the proxy.
223 223 passwd;;
224 224 Optional. Password to authenticate with at the proxy server.
225 225 user;;
226 226 Optional. User name to authenticate with at the proxy server.
227 227
228 228 paths::
229 229 Assigns symbolic names to repositories. The left side is the
230 230 symbolic name, and the right gives the directory or URL that is the
231 231 location of the repository.
232 232
233 233 ui::
234 234 User interface controls.
235 235 debug;;
236 236 Print debugging information. True or False. Default is False.
237 237 editor;;
238 238 The editor to use during a commit. Default is $EDITOR or "vi".
239 239 interactive;;
240 240 Allow prompting the user. True or False. Default is True.
241 241 merge;;
242 242 The conflict resolution program to use during a manual merge.
243 243 Default is "hgmerge".
244 244 quiet;;
245 245 Reduce the amount of output printed. True or False. Default is False.
246 246 remotecmd;;
247 247 Remote command to use for clone/push/pull operations. Default is 'hg'.
248 248 ssh;;
249 249 Command to use for SSH connections. Default is 'ssh'.
250 timeout;;
251 The timeout used when a lock is held (in seconds); a negative value
252 means no timeout. Default is 1024.
250 253 username;;
251 254 The committer of a changeset created when running "commit".
252 255 Typically a person's name and email address, e.g. "Fred Widget
253 256 <fred@example.com>". Default is $EMAIL or username@hostname.
254 257 verbose;;
255 258 Increase the amount of output printed. True or False. Default is False.
256 259
257 260
258 261 web::
259 262 Web interface configuration.
260 263 accesslog;;
261 264 Where to output the access log. Default is stdout.
262 265 address;;
263 266 Interface address to bind to. Default is all.
264 267 allowbz2;;
265 268 Whether to allow .tar.bz2 downloading of repo revisions. Default is false.
266 269 allowgz;;
267 270 Whether to allow .tar.gz downloading of repo revisions. Default is false.
268 271 allowpull;;
269 272 Whether to allow pulling from the repository. Default is true.
270 273 allowzip;;
271 274 Whether to allow .zip downloading of repo revisions. Default is false.
272 275 This feature creates temporary files.
273 276 description;;
274 277 Textual description of the repository's purpose or contents.
275 278 Default is "unknown".
276 279 errorlog;;
277 280 Where to output the error log. Default is stderr.
278 281 ipv6;;
279 282 Whether to use IPv6. Default is false.
280 283 name;;
281 284 Repository name to use in the web interface. Default is current
282 285 working directory.
283 286 maxchanges;;
284 287 Maximum number of changes to list on the changelog. Default is 10.
285 288 maxfiles;;
286 289 Maximum number of files to list per changeset. Default is 10.
287 290 port;;
288 291 Port to listen on. Default is 8000.
289 292 style;;
290 293 Which template map style to use.
291 294 templates;;
292 295 Where to find the HTML templates. Default is install path.
293 296
294 297
295 298 AUTHOR
296 299 ------
297 300 Bryan O'Sullivan <bos@serpentine.com>.
298 301
299 302 Mercurial was written by Matt Mackall <mpm@selenic.com>.
300 303
301 304 SEE ALSO
302 305 --------
303 306 hg(1)
304 307
305 308 COPYING
306 309 -------
307 310 This manual page is copyright 2005 Bryan O'Sullivan.
308 311 Mercurial is copyright 2005 Matt Mackall.
309 312 Free use of this software is granted under the terms of the GNU General
310 313 Public License (GPL).
@@ -1,1861 +1,1868 b''
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 import struct, os, util
9 9 import filelog, manifest, changelog, dirstate, repo
10 10 from node import *
11 11 from i18n import gettext as _
12 12 from demandload import *
13 13 demandload(globals(), "re lock transaction tempfile stat mdiff errno")
14 14
15 15 class localrepository(object):
16 16 def __init__(self, ui, path=None, create=0):
17 17 if not path:
18 18 p = os.getcwd()
19 19 while not os.path.isdir(os.path.join(p, ".hg")):
20 20 oldp = p
21 21 p = os.path.dirname(p)
22 22 if p == oldp:
23 23 raise repo.RepoError(_("no repo found"))
24 24 path = p
25 25 self.path = os.path.join(path, ".hg")
26 26
27 27 if not create and not os.path.isdir(self.path):
28 28 raise repo.RepoError(_("repository %s not found") % path)
29 29
30 30 self.root = os.path.abspath(path)
31 31 self.ui = ui
32 32 self.opener = util.opener(self.path)
33 33 self.wopener = util.opener(self.root)
34 34 self.manifest = manifest.manifest(self.opener)
35 35 self.changelog = changelog.changelog(self.opener)
36 36 self.tagscache = None
37 37 self.nodetagscache = None
38 38 self.encodepats = None
39 39 self.decodepats = None
40 40
41 41 if create:
42 42 os.mkdir(self.path)
43 43 os.mkdir(self.join("data"))
44 44
45 45 self.dirstate = dirstate.dirstate(self.opener, ui, self.root)
46 46 try:
47 47 self.ui.readconfig(self.join("hgrc"))
48 48 except IOError:
49 49 pass
50 50
51 51 def hook(self, name, throw=False, **args):
52 52 def runhook(name, cmd):
53 53 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
54 54 old = {}
55 55 for k, v in args.items():
56 56 k = k.upper()
57 57 old['HG_' + k] = os.environ.get(k, None)
58 58 old[k] = os.environ.get(k, None)
59 59 os.environ['HG_' + k] = str(v)
60 60 os.environ[k] = str(v)
61 61
62 62 try:
63 63 # Hooks run in the repository root
64 64 olddir = os.getcwd()
65 65 os.chdir(self.root)
66 66 r = os.system(cmd)
67 67 finally:
68 68 for k, v in old.items():
69 69 if v is not None:
70 70 os.environ[k] = v
71 71 else:
72 72 del os.environ[k]
73 73
74 74 os.chdir(olddir)
75 75
76 76 if r:
77 77 desc, r = util.explain_exit(r)
78 78 if throw:
79 79 raise util.Abort(_('%s hook %s') % (name, desc))
80 80 self.ui.warn(_('error: %s hook %s\n') % (name, desc))
81 81 return False
82 82 return True
83 83
84 84 r = True
85 85 for hname, cmd in self.ui.configitems("hooks"):
86 86 s = hname.split(".")
87 87 if s[0] == name and cmd:
88 88 r = runhook(hname, cmd) and r
89 89 return r
90 90
91 91 def tags(self):
92 92 '''return a mapping of tag to node'''
93 93 if not self.tagscache:
94 94 self.tagscache = {}
95 95 def addtag(self, k, n):
96 96 try:
97 97 bin_n = bin(n)
98 98 except TypeError:
99 99 bin_n = ''
100 100 self.tagscache[k.strip()] = bin_n
101 101
102 102 try:
103 103 # read each head of the tags file, ending with the tip
104 104 # and add each tag found to the map, with "newer" ones
105 105 # taking precedence
106 106 fl = self.file(".hgtags")
107 107 h = fl.heads()
108 108 h.reverse()
109 109 for r in h:
110 110 for l in fl.read(r).splitlines():
111 111 if l:
112 112 n, k = l.split(" ", 1)
113 113 addtag(self, k, n)
114 114 except KeyError:
115 115 pass
116 116
117 117 try:
118 118 f = self.opener("localtags")
119 119 for l in f:
120 120 n, k = l.split(" ", 1)
121 121 addtag(self, k, n)
122 122 except IOError:
123 123 pass
124 124
125 125 self.tagscache['tip'] = self.changelog.tip()
126 126
127 127 return self.tagscache
128 128
129 129 def tagslist(self):
130 130 '''return a list of tags ordered by revision'''
131 131 l = []
132 132 for t, n in self.tags().items():
133 133 try:
134 134 r = self.changelog.rev(n)
135 135 except:
136 136 r = -2 # sort to the beginning of the list if unknown
137 137 l.append((r, t, n))
138 138 l.sort()
139 139 return [(t, n) for r, t, n in l]
140 140
141 141 def nodetags(self, node):
142 142 '''return the tags associated with a node'''
143 143 if not self.nodetagscache:
144 144 self.nodetagscache = {}
145 145 for t, n in self.tags().items():
146 146 self.nodetagscache.setdefault(n, []).append(t)
147 147 return self.nodetagscache.get(node, [])
148 148
149 149 def lookup(self, key):
150 150 try:
151 151 return self.tags()[key]
152 152 except KeyError:
153 153 try:
154 154 return self.changelog.lookup(key)
155 155 except:
156 156 raise repo.RepoError(_("unknown revision '%s'") % key)
157 157
158 158 def dev(self):
159 159 return os.stat(self.path).st_dev
160 160
161 161 def local(self):
162 162 return True
163 163
164 164 def join(self, f):
165 165 return os.path.join(self.path, f)
166 166
167 167 def wjoin(self, f):
168 168 return os.path.join(self.root, f)
169 169
170 170 def file(self, f):
171 171 if f[0] == '/':
172 172 f = f[1:]
173 173 return filelog.filelog(self.opener, f)
174 174
175 175 def getcwd(self):
176 176 return self.dirstate.getcwd()
177 177
178 178 def wfile(self, f, mode='r'):
179 179 return self.wopener(f, mode)
180 180
181 181 def wread(self, filename):
182 182 if self.encodepats == None:
183 183 l = []
184 184 for pat, cmd in self.ui.configitems("encode"):
185 185 mf = util.matcher("", "/", [pat], [], [])[1]
186 186 l.append((mf, cmd))
187 187 self.encodepats = l
188 188
189 189 data = self.wopener(filename, 'r').read()
190 190
191 191 for mf, cmd in self.encodepats:
192 192 if mf(filename):
193 193 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
194 194 data = util.filter(data, cmd)
195 195 break
196 196
197 197 return data
198 198
199 199 def wwrite(self, filename, data, fd=None):
200 200 if self.decodepats == None:
201 201 l = []
202 202 for pat, cmd in self.ui.configitems("decode"):
203 203 mf = util.matcher("", "/", [pat], [], [])[1]
204 204 l.append((mf, cmd))
205 205 self.decodepats = l
206 206
207 207 for mf, cmd in self.decodepats:
208 208 if mf(filename):
209 209 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
210 210 data = util.filter(data, cmd)
211 211 break
212 212
213 213 if fd:
214 214 return fd.write(data)
215 215 return self.wopener(filename, 'w').write(data)
216 216
217 217 def transaction(self):
218 218 # save dirstate for undo
219 219 try:
220 220 ds = self.opener("dirstate").read()
221 221 except IOError:
222 222 ds = ""
223 223 self.opener("journal.dirstate", "w").write(ds)
224 224
225 225 def after():
226 226 util.rename(self.join("journal"), self.join("undo"))
227 227 util.rename(self.join("journal.dirstate"),
228 228 self.join("undo.dirstate"))
229 229
230 230 return transaction.transaction(self.ui.warn, self.opener,
231 231 self.join("journal"), after)
232 232
233 233 def recover(self):
234 234 l = self.lock()
235 235 if os.path.exists(self.join("journal")):
236 236 self.ui.status(_("rolling back interrupted transaction\n"))
237 237 transaction.rollback(self.opener, self.join("journal"))
238 238 self.manifest = manifest.manifest(self.opener)
239 239 self.changelog = changelog.changelog(self.opener)
240 240 return True
241 241 else:
242 242 self.ui.warn(_("no interrupted transaction available\n"))
243 243 return False
244 244
245 245 def undo(self, wlock=None):
246 246 if not wlock:
247 247 wlock = self.wlock()
248 248 l = self.lock()
249 249 if os.path.exists(self.join("undo")):
250 250 self.ui.status(_("rolling back last transaction\n"))
251 251 transaction.rollback(self.opener, self.join("undo"))
252 252 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
253 253 self.dirstate.read()
254 254 else:
255 255 self.ui.warn(_("no undo information available\n"))
256 256
257 257 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None):
258 258 try:
259 259 l = lock.lock(self.join(lockname), 0, releasefn)
260 260 except lock.LockHeld, inst:
261 261 if not wait:
262 262 raise inst
263 263 self.ui.warn(_("waiting for lock held by %s\n") % inst.args[0])
264 l = lock.lock(self.join(lockname), wait, releasefn)
264 try:
265 # default to 1024 seconds timeout
266 l = lock.lock(self.join(lockname),
267 int(self.ui.config("ui", "timeout") or 1024),
268 releasefn)
269 except lock.LockHeld, inst:
270 raise util.Abort(_("timeout while waiting for "
271 "lock held by %s") % inst.args[0])
265 272 if acquirefn:
266 273 acquirefn()
267 274 return l
268 275
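The do_lock logic above first tries a non-blocking acquire, warns that the lock is held, then retries with a bounded wait. The pattern can be sketched independently of Mercurial's lock module (the LockHeld class and the fake try_lock function below are illustrative):

```python
import time

class LockHeld(Exception):
    pass

def acquire(try_lock, timeout):
    # keep retrying the acquire until the timeout expires
    # (Mercurial defaults the timeout to 1024 seconds)
    deadline = time.time() + timeout
    while True:
        try:
            return try_lock()
        except LockHeld:
            if time.time() >= deadline:
                raise
            time.sleep(0.01)

attempts = []
def fake_try_lock():
    # fails twice, then succeeds, as if another process released the lock
    attempts.append(1)
    if len(attempts) < 3:
        raise LockHeld("other process")
    return "lock"

result = acquire(fake_try_lock, timeout=5)
print(result)  # -> lock
```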
269 276 def lock(self, wait=1):
270 277 return self.do_lock("lock", wait)
271 278
272 279 def wlock(self, wait=1):
273 280 return self.do_lock("wlock", wait,
274 281 self.dirstate.write,
275 282 self.dirstate.read)
276 283
277 284 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
278 285 "determine whether a new filenode is needed"
279 286 fp1 = manifest1.get(filename, nullid)
280 287 fp2 = manifest2.get(filename, nullid)
281 288
282 289 if fp2 != nullid:
283 290 # is one parent an ancestor of the other?
284 291 fpa = filelog.ancestor(fp1, fp2)
285 292 if fpa == fp1:
286 293 fp1, fp2 = fp2, nullid
287 294 elif fpa == fp2:
288 295 fp2 = nullid
289 296
290 297 # is the file unmodified from the parent? report existing entry
291 298 if fp2 == nullid and text == filelog.read(fp1):
292 299 return (fp1, None, None)
293 300
294 301 return (None, fp1, fp2)
295 302
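The parent reduction inside checkfilemerge can be isolated: when one parent is an ancestor of the other, the ancestor is dropped, since only the more recent parent contributes a new filenode. A sketch with a toy ancestor function (nullid replaced by None for brevity):

```python
def reduce_parents(fp1, fp2, ancestor):
    # if one parent is an ancestor of the other, drop the ancestor
    # and keep the surviving parent in fp1
    if fp2 is not None:
        fpa = ancestor(fp1, fp2)
        if fpa == fp1:
            fp1, fp2 = fp2, None
        elif fpa == fp2:
            fp2 = None
    return fp1, fp2

# toy ancestor function: "a" is an ancestor of everything
anc = lambda x, y: "a"
print(reduce_parents("a", "b", anc))  # -> ('b', None)
print(reduce_parents("b", "c", anc))  # -> ('b', 'c')
```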
296 303 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
297 304 orig_parent = self.dirstate.parents()[0] or nullid
298 305 p1 = p1 or self.dirstate.parents()[0] or nullid
299 306 p2 = p2 or self.dirstate.parents()[1] or nullid
300 307 c1 = self.changelog.read(p1)
301 308 c2 = self.changelog.read(p2)
302 309 m1 = self.manifest.read(c1[0])
303 310 mf1 = self.manifest.readflags(c1[0])
304 311 m2 = self.manifest.read(c2[0])
305 312 changed = []
306 313
307 314 if orig_parent == p1:
308 315 update_dirstate = 1
309 316 else:
310 317 update_dirstate = 0
311 318
312 319 if not wlock:
313 320 wlock = self.wlock()
314 321 l = self.lock()
315 322 tr = self.transaction()
316 323 mm = m1.copy()
317 324 mfm = mf1.copy()
318 325 linkrev = self.changelog.count()
319 326 for f in files:
320 327 try:
321 328 t = self.wread(f)
322 329 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
323 330 r = self.file(f)
324 331 mfm[f] = tm
325 332
326 333 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
327 334 if entry:
328 335 mm[f] = entry
329 336 continue
330 337
331 338 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
332 339 changed.append(f)
333 340 if update_dirstate:
334 341 self.dirstate.update([f], "n")
335 342 except IOError:
336 343 try:
337 344 del mm[f]
338 345 del mfm[f]
339 346 if update_dirstate:
340 347 self.dirstate.forget([f])
341 348 except:
342 349 # deleted from p2?
343 350 pass
344 351
345 352 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
346 353 user = user or self.ui.username()
347 354 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
348 355 tr.close()
349 356 if update_dirstate:
350 357 self.dirstate.setparents(n, nullid)
351 358
352 359 def commit(self, files=None, text="", user=None, date=None,
353 360 match=util.always, force=False, wlock=None):
354 361 commit = []
355 362 remove = []
356 363 changed = []
357 364
358 365 if files:
359 366 for f in files:
360 367 s = self.dirstate.state(f)
361 368 if s in 'nmai':
362 369 commit.append(f)
363 370 elif s == 'r':
364 371 remove.append(f)
365 372 else:
366 373 self.ui.warn(_("%s not tracked!\n") % f)
367 374 else:
368 375 modified, added, removed, deleted, unknown = self.changes(match=match)
369 376 commit = modified + added
370 377 remove = removed
371 378
372 379 p1, p2 = self.dirstate.parents()
373 380 c1 = self.changelog.read(p1)
374 381 c2 = self.changelog.read(p2)
375 382 m1 = self.manifest.read(c1[0])
376 383 mf1 = self.manifest.readflags(c1[0])
377 384 m2 = self.manifest.read(c2[0])
378 385
379 386 if not commit and not remove and not force and p2 == nullid:
380 387 self.ui.status(_("nothing changed\n"))
381 388 return None
382 389
383 390 xp1 = hex(p1)
384 391 if p2 == nullid: xp2 = ''
385 392 else: xp2 = hex(p2)
386 393
387 394 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
388 395
389 396 if not wlock:
390 397 wlock = self.wlock()
391 398 l = self.lock()
392 399 tr = self.transaction()
393 400
394 401 # check in files
395 402 new = {}
396 403 linkrev = self.changelog.count()
397 404 commit.sort()
398 405 for f in commit:
399 406 self.ui.note(f + "\n")
400 407 try:
401 408 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
402 409 t = self.wread(f)
403 410 except IOError:
404 411 self.ui.warn(_("trouble committing %s!\n") % f)
405 412 raise
406 413
407 414 r = self.file(f)
408 415
409 416 meta = {}
410 417 cp = self.dirstate.copied(f)
411 418 if cp:
412 419 meta["copy"] = cp
413 420 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
414 421 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
415 422 fp1, fp2 = nullid, nullid
416 423 else:
417 424 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
418 425 if entry:
419 426 new[f] = entry
420 427 continue
421 428
422 429 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
423 430 # remember what we've added so that we can later calculate
424 431 # the files to pull from a set of changesets
425 432 changed.append(f)
426 433
427 434 # update manifest
428 435 m1 = m1.copy()
429 436 m1.update(new)
430 437 for f in remove:
431 438 if f in m1:
432 439 del m1[f]
433 440 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
434 441 (new, remove))
435 442
436 443 # add changeset
437 444 new = new.keys()
438 445 new.sort()
439 446
440 447 if not text:
441 448 edittext = [""]
442 449 if p2 != nullid:
443 450 edittext.append("HG: branch merge")
444 451 edittext.extend(["HG: changed %s" % f for f in changed])
445 452 edittext.extend(["HG: removed %s" % f for f in remove])
446 453 if not changed and not remove:
447 454 edittext.append("HG: no files changed")
448 455 edittext.append("")
449 456 # run editor in the repository root
450 457 olddir = os.getcwd()
451 458 os.chdir(self.root)
452 459 edittext = self.ui.edit("\n".join(edittext))
453 460 os.chdir(olddir)
454 461 if not edittext.rstrip():
455 462 return None
456 463 text = edittext
457 464
458 465 user = user or self.ui.username()
459 466 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
460 467 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
461 468 parent2=xp2)
462 469 tr.close()
463 470
464 471 self.dirstate.setparents(n)
465 472 self.dirstate.update(new, "n")
466 473 self.dirstate.forget(remove)
467 474
468 475 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
469 476 return n
470 477
471 478 def walk(self, node=None, files=[], match=util.always):
472 479 if node:
473 480 fdict = dict.fromkeys(files)
474 481 for fn in self.manifest.read(self.changelog.read(node)[0]):
475 482 fdict.pop(fn, None)
476 483 if match(fn):
477 484 yield 'm', fn
478 485 for fn in fdict:
479 486 self.ui.warn(_('%s: No such file in rev %s\n') % (
480 487 util.pathto(self.getcwd(), fn), short(node)))
481 488 else:
482 489 for src, fn in self.dirstate.walk(files, match):
483 490 yield src, fn
484 491
485 492 def changes(self, node1=None, node2=None, files=[], match=util.always,
486 493 wlock=None):
487 494 """return changes between two nodes or node and working directory
488 495
489 496 If node1 is None, use the first dirstate parent instead.
490 497 If node2 is None, compare node1 with working directory.
491 498 """
492 499
493 500 def fcmp(fn, mf):
494 501 t1 = self.wread(fn)
495 502 t2 = self.file(fn).read(mf.get(fn, nullid))
496 503 return cmp(t1, t2)
497 504
498 505 def mfmatches(node):
499 506 change = self.changelog.read(node)
500 507 mf = dict(self.manifest.read(change[0]))
501 508 for fn in mf.keys():
502 509 if not match(fn):
503 510 del mf[fn]
504 511 return mf
505 512
506 513 # are we comparing the working directory?
507 514 if not node2:
508 515 if not wlock:
509 516 try:
510 517 wlock = self.wlock(wait=0)
511 518 except lock.LockException:
512 519 wlock = None
513 520 lookup, modified, added, removed, deleted, unknown = (
514 521 self.dirstate.changes(files, match))
515 522
516 523 # are we comparing working dir against its parent?
517 524 if not node1:
518 525 if lookup:
519 526 # do a full compare of any files that might have changed
520 527 mf2 = mfmatches(self.dirstate.parents()[0])
521 528 for f in lookup:
522 529 if fcmp(f, mf2):
523 530 modified.append(f)
524 531 elif wlock is not None:
525 532 self.dirstate.update([f], "n")
526 533 else:
527 534 # we are comparing working dir against non-parent
528 535 # generate a pseudo-manifest for the working dir
529 536 mf2 = mfmatches(self.dirstate.parents()[0])
530 537 for f in lookup + modified + added:
531 538 mf2[f] = ""
532 539 for f in removed:
533 540 if f in mf2:
534 541 del mf2[f]
535 542 else:
536 543 # we are comparing two revisions
537 544 deleted, unknown = [], []
538 545 mf2 = mfmatches(node2)
539 546
540 547 if node1:
541 548 # flush lists from dirstate before comparing manifests
542 549 modified, added = [], []
543 550
544 551 mf1 = mfmatches(node1)
545 552
546 553 for fn in mf2:
547 554 if mf1.has_key(fn):
548 555 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
549 556 modified.append(fn)
550 557 del mf1[fn]
551 558 else:
552 559 added.append(fn)
553 560
554 561 removed = mf1.keys()
555 562
556 563 # sort and return results:
557 564 for l in modified, added, removed, deleted, unknown:
558 565 l.sort()
559 566 return (modified, added, removed, deleted, unknown)
560 567
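The manifest comparison performed when two revisions are given is essentially a dictionary diff; a simplified sketch with plain dicts standing in for manifests:

```python
def diff_manifests(mf1, mf2):
    # files present in both with different entries are modified,
    # files only in mf2 are added, and whatever is left over in
    # mf1 afterwards was removed
    mf1 = dict(mf1)
    modified, added = [], []
    for fn in mf2:
        if fn in mf1:
            if mf1[fn] != mf2[fn]:
                modified.append(fn)
            del mf1[fn]
        else:
            added.append(fn)
    removed = sorted(mf1)
    return sorted(modified), sorted(added), removed

print(diff_manifests({"a": 1, "b": 2}, {"a": 1, "b": 3, "c": 4}))
# -> (['b'], ['c'], [])
```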
561 568 def add(self, list, wlock=None):
562 569 if not wlock:
563 570 wlock = self.wlock()
564 571 for f in list:
565 572 p = self.wjoin(f)
566 573 if not os.path.exists(p):
567 574 self.ui.warn(_("%s does not exist!\n") % f)
568 575 elif not os.path.isfile(p):
569 576 self.ui.warn(_("%s not added: only files supported currently\n")
570 577 % f)
571 578 elif self.dirstate.state(f) in 'an':
572 579 self.ui.warn(_("%s already tracked!\n") % f)
573 580 else:
574 581 self.dirstate.update([f], "a")
575 582
576 583 def forget(self, list, wlock=None):
577 584 if not wlock:
578 585 wlock = self.wlock()
579 586 for f in list:
580 587 if self.dirstate.state(f) not in 'ai':
581 588 self.ui.warn(_("%s not added!\n") % f)
582 589 else:
583 590 self.dirstate.forget([f])
584 591
585 592 def remove(self, list, unlink=False, wlock=None):
586 593 if unlink:
587 594 for f in list:
588 595 try:
589 596 util.unlink(self.wjoin(f))
590 597 except OSError, inst:
591 598 if inst.errno != errno.ENOENT:
592 599 raise
593 600 if not wlock:
594 601 wlock = self.wlock()
595 602 for f in list:
596 603 p = self.wjoin(f)
597 604 if os.path.exists(p):
598 605 self.ui.warn(_("%s still exists!\n") % f)
599 606 elif self.dirstate.state(f) == 'a':
600 607 self.dirstate.forget([f])
601 608 elif f not in self.dirstate:
602 609 self.ui.warn(_("%s not tracked!\n") % f)
603 610 else:
604 611 self.dirstate.update([f], "r")
605 612
606 613 def undelete(self, list, wlock=None):
607 614 p = self.dirstate.parents()[0]
608 615 mn = self.changelog.read(p)[0]
609 616 mf = self.manifest.readflags(mn)
610 617 m = self.manifest.read(mn)
611 618 if not wlock:
612 619 wlock = self.wlock()
613 620 for f in list:
614 621 if self.dirstate.state(f) not in "r":
615 622 self.ui.warn(_("%s not removed!\n") % f)
616 623 else:
617 624 t = self.file(f).read(m[f])
618 625 self.wwrite(f, t)
619 626 util.set_exec(self.wjoin(f), mf[f])
620 627 self.dirstate.update([f], "n")
621 628
622 629 def copy(self, source, dest, wlock=None):
623 630 p = self.wjoin(dest)
624 631 if not os.path.exists(p):
625 632 self.ui.warn(_("%s does not exist!\n") % dest)
626 633 elif not os.path.isfile(p):
627 634 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
628 635 else:
629 636 if not wlock:
630 637 wlock = self.wlock()
631 638 if self.dirstate.state(dest) == '?':
632 639 self.dirstate.update([dest], "a")
633 640 self.dirstate.copy(source, dest)
634 641
635 642 def heads(self, start=None):
636 643 heads = self.changelog.heads(start)
637 644 # sort the output in rev descending order
638 645 heads = [(-self.changelog.rev(h), h) for h in heads]
639 646 heads.sort()
640 647 return [n for (r, n) in heads]
641 648
642 649 # branchlookup returns a dict giving a list of branches for
643 650 # each head. A branch is defined as the tag of a node or
644 651 # the branch of the node's parents. If a node has multiple
645 652 # branch tags, tags are eliminated if they are visible from other
646 653 # branch tags.
647 654 #
648 655 # So, for this graph: a->b->c->d->e
649 656 # \ /
650 657 # aa -----/
651 658 # a has tag 2.6.12
652 659 # d has tag 2.6.13
653 660 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
654 661 # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
655 662 # from the list.
656 663 #
657 664 # It is possible that more than one head will have the same branch tag.
658 665 # callers need to check the result for multiple heads under the same
659 666 # branch tag if that is a problem for them (ie checkout of a specific
660 667 # branch).
661 668 #
662 669 # passing in a specific branch will limit the depth of the search
663 670 # through the parents. It won't limit the branches returned in the
664 671 # result though.
665 672 def branchlookup(self, heads=None, branch=None):
666 673 if not heads:
667 674 heads = self.heads()
668 675 headt = [ h for h in heads ]
669 676 chlog = self.changelog
670 677 branches = {}
671 678 merges = []
672 679 seenmerge = {}
673 680
674 681 # traverse the tree once for each head, recording in the branches
675 682 # dict which tags are visible from this head. The branches
676 683 # dict also records which tags are visible from each tag
677 684 # while we traverse.
678 685 while headt or merges:
679 686 if merges:
680 687 n, found = merges.pop()
681 688 visit = [n]
682 689 else:
683 690 h = headt.pop()
684 691 visit = [h]
685 692 found = [h]
686 693 seen = {}
687 694 while visit:
688 695 n = visit.pop()
689 696 if n in seen:
690 697 continue
691 698 pp = chlog.parents(n)
692 699 tags = self.nodetags(n)
693 700 if tags:
694 701 for x in tags:
695 702 if x == 'tip':
696 703 continue
697 704 for f in found:
698 705 branches.setdefault(f, {})[n] = 1
699 706 branches.setdefault(n, {})[n] = 1
700 707 break
701 708 if n not in found:
702 709 found.append(n)
703 710 if branch in tags:
704 711 continue
705 712 seen[n] = 1
706 713 if pp[1] != nullid and n not in seenmerge:
707 714 merges.append((pp[1], [x for x in found]))
708 715 seenmerge[n] = 1
709 716 if pp[0] != nullid:
710 717 visit.append(pp[0])
711 718 # traverse the branches dict, eliminating branch tags from each
712 719 # head that are visible from another branch tag for that head.
713 720 out = {}
714 721 viscache = {}
715 722 for h in heads:
716 723 def visible(node):
717 724 if node in viscache:
718 725 return viscache[node]
719 726 ret = {}
720 727 visit = [node]
721 728 while visit:
722 729 x = visit.pop()
723 730 if x in viscache:
724 731 ret.update(viscache[x])
725 732 elif x not in ret:
726 733 ret[x] = 1
727 734 if x in branches:
728 735 visit[len(visit):] = branches[x].keys()
729 736 viscache[node] = ret
730 737 return ret
731 738 if h not in branches:
732 739 continue
733 740 # O(n^2), but somewhat limited. This only searches the
734 741 # tags visible from a specific head, not all the tags in the
735 742 # whole repo.
736 743 for b in branches[h]:
737 744 vis = False
738 745 for bb in branches[h].keys():
739 746 if b != bb:
740 747 if b in visible(bb):
741 748 vis = True
742 749 break
743 750 if not vis:
744 751 l = out.setdefault(h, [])
745 752 l[len(l):] = self.nodetags(b)
746 753 return out
747 754
748 755 def branches(self, nodes):
749 756 if not nodes:
750 757 nodes = [self.changelog.tip()]
751 758 b = []
752 759 for n in nodes:
753 760 t = n
754 761 while n:
755 762 p = self.changelog.parents(n)
756 763 if p[1] != nullid or p[0] == nullid:
757 764 b.append((t, n, p[0], p[1]))
758 765 break
759 766 n = p[0]
760 767 return b
761 768
762 769 def between(self, pairs):
763 770 r = []
764 771
765 772 for top, bottom in pairs:
766 773 n, l, i = top, [], 0
767 774 f = 1
768 775
769 776 while n != bottom:
770 777 p = self.changelog.parents(n)[0]
771 778 if i == f:
772 779 l.append(n)
773 780 f = f * 2
774 781 n = p
775 782 i += 1
776 783
777 784 r.append(l)
778 785
779 786 return r
780 787
781 788 def findincoming(self, remote, base=None, heads=None):
782 789 m = self.changelog.nodemap
783 790 search = []
784 791 fetch = {}
785 792 seen = {}
786 793 seenbranch = {}
787 794 if base is None:
788 795 base = {}
789 796
790 797 # assume we're closer to the tip than the root
791 798 # and start by examining the heads
792 799 self.ui.status(_("searching for changes\n"))
793 800
794 801 if not heads:
795 802 heads = remote.heads()
796 803
797 804 unknown = []
798 805 for h in heads:
799 806 if h not in m:
800 807 unknown.append(h)
801 808 else:
802 809 base[h] = 1
803 810
804 811 if not unknown:
805 812 return None
806 813
807 814 rep = {}
808 815 reqcnt = 0
809 816
810 817 # search through remote branches
811 818 # a 'branch' here is a linear segment of history, with four parts:
812 819 # head, root, first parent, second parent
813 820 # (a branch always has two parents (or none) by definition)
814 821 unknown = remote.branches(unknown)
815 822 while unknown:
816 823 r = []
817 824 while unknown:
818 825 n = unknown.pop(0)
819 826 if n[0] in seen:
820 827 continue
821 828
822 829 self.ui.debug(_("examining %s:%s\n")
823 830 % (short(n[0]), short(n[1])))
824 831 if n[0] == nullid:
825 832 break
826 833 if n in seenbranch:
827 834 self.ui.debug(_("branch already found\n"))
828 835 continue
829 836 if n[1] and n[1] in m: # do we know the base?
830 837 self.ui.debug(_("found incomplete branch %s:%s\n")
831 838 % (short(n[0]), short(n[1])))
832 839 search.append(n) # schedule branch range for scanning
833 840 seenbranch[n] = 1
834 841 else:
835 842 if n[1] not in seen and n[1] not in fetch:
836 843 if n[2] in m and n[3] in m:
837 844 self.ui.debug(_("found new changeset %s\n") %
838 845 short(n[1]))
839 846 fetch[n[1]] = 1 # earliest unknown
840 847 base[n[2]] = 1 # latest known
841 848 continue
842 849
843 850 for a in n[2:4]:
844 851 if a not in rep:
845 852 r.append(a)
846 853 rep[a] = 1
847 854
848 855 seen[n[0]] = 1
849 856
850 857 if r:
851 858 reqcnt += 1
852 859 self.ui.debug(_("request %d: %s\n") %
853 860 (reqcnt, " ".join(map(short, r))))
854 861 for p in range(0, len(r), 10):
855 862 for b in remote.branches(r[p:p+10]):
856 863 self.ui.debug(_("received %s:%s\n") %
857 864 (short(b[0]), short(b[1])))
858 865 if b[0] in m:
859 866 self.ui.debug(_("found base node %s\n")
860 867 % short(b[0]))
861 868 base[b[0]] = 1
862 869 elif b[0] not in seen:
863 870 unknown.append(b)
864 871
865 872 # do binary search on the branches we found
866 873 while search:
867 874 n = search.pop(0)
868 875 reqcnt += 1
869 876 l = remote.between([(n[0], n[1])])[0]
870 877 l.append(n[1])
871 878 p = n[0]
872 879 f = 1
873 880 for i in l:
874 881 self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
875 882 if i in m:
876 883 if f <= 2:
877 884 self.ui.debug(_("found new branch changeset %s\n") %
878 885 short(p))
879 886 fetch[p] = 1
880 887 base[i] = 1
881 888 else:
882 889 self.ui.debug(_("narrowed branch search to %s:%s\n")
883 890 % (short(p), short(i)))
884 891 search.append((p, i))
885 892 break
886 893 p, f = i, f * 2
887 894
888 895 # sanity check our fetch list
889 896 for f in fetch.keys():
890 897 if f in m:
891 898 raise repo.RepoError(_("already have changeset ") + short(f))
892 899
893 900 if base.keys() == [nullid]:
894 901 self.ui.warn(_("warning: pulling from an unrelated repository!\n"))
895 902
896 903 self.ui.note(_("found new changesets starting at ") +
897 904 " ".join([short(f) for f in fetch]) + "\n")
898 905
899 906 self.ui.debug(_("%d total queries\n") % reqcnt)
900 907
901 908 return fetch.keys()
902 909
903 910 def findoutgoing(self, remote, base=None, heads=None):
904 911 if base is None:
905 912 base = {}
906 913 self.findincoming(remote, base, heads)
907 914
908 915 self.ui.debug(_("common changesets up to ")
909 916 + " ".join(map(short, base.keys())) + "\n")
910 917
911 918 remain = dict.fromkeys(self.changelog.nodemap)
912 919
913 920 # prune everything remote has from the tree
914 921 del remain[nullid]
915 922 remove = base.keys()
916 923 while remove:
917 924 n = remove.pop(0)
918 925 if n in remain:
919 926 del remain[n]
920 927 for p in self.changelog.parents(n):
921 928 remove.append(p)
922 929
923 930 # find every node whose parents have been pruned
924 931 subset = []
925 932 for n in remain:
926 933 p1, p2 = self.changelog.parents(n)
927 934 if p1 not in remain and p2 not in remain:
928 935 subset.append(n)
929 936
930 937 # this is the set of all roots we have to push
931 938 return subset
932 939
933 940 def pull(self, remote, heads=None):
934 941 l = self.lock()
935 942
936 943 # if we have an empty repo, fetch everything
937 944 if self.changelog.tip() == nullid:
938 945 self.ui.status(_("requesting all changes\n"))
939 946 fetch = [nullid]
940 947 else:
941 948 fetch = self.findincoming(remote)
942 949
943 950 if not fetch:
944 951 self.ui.status(_("no changes found\n"))
945 952 return 1
946 953
947 954 if heads is None:
948 955 cg = remote.changegroup(fetch, 'pull')
949 956 else:
950 957 cg = remote.changegroupsubset(fetch, heads, 'pull')
951 958 return self.addchangegroup(cg)
952 959
953 960 def push(self, remote, force=False, revs=None):
954 961 lock = remote.lock()
955 962
956 963 base = {}
957 964 heads = remote.heads()
958 965 inc = self.findincoming(remote, base, heads)
959 966 if not force and inc:
960 967 self.ui.warn(_("abort: unsynced remote changes!\n"))
961 968 self.ui.status(_("(did you forget to sync? use push -f to force)\n"))
962 969 return 1
963 970
964 971 update = self.findoutgoing(remote, base)
965 972 if revs is not None:
966 973 msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
967 974 else:
968 975 bases, heads = update, self.changelog.heads()
969 976
970 977 if not bases:
971 978 self.ui.status(_("no changes found\n"))
972 979 return 1
973 980 elif not force:
974 981 if len(bases) < len(heads):
975 982 self.ui.warn(_("abort: push creates new remote branches!\n"))
976 983 self.ui.status(_("(did you forget to merge?"
977 984 " use push -f to force)\n"))
978 985 return 1
979 986
980 987 if revs is None:
981 988 cg = self.changegroup(update, 'push')
982 989 else:
983 990 cg = self.changegroupsubset(update, revs, 'push')
984 991 return remote.addchangegroup(cg)
985 992
986 993 def changegroupsubset(self, bases, heads, source):
987 994 """This function generates a changegroup consisting of all the nodes
988 995 that are descendants of any of the bases, and ancestors of any of
989 996 the heads.
990 997
991 998 It is fairly complex as determining which filenodes and which
992 999 manifest nodes need to be included for the changeset to be complete
993 1000 is non-trivial.
994 1001
995 1002 Another wrinkle is doing the reverse, figuring out which changeset in
996 1003 the changegroup a particular filenode or manifestnode belongs to."""
997 1004
998 1005 self.hook('preoutgoing', throw=True, source=source)
999 1006
1000 1007 # Set up some initial variables
1001 1008 # Make it easy to refer to self.changelog
1002 1009 cl = self.changelog
1003 1010 # msng is short for missing - compute the list of changesets in this
1004 1011 # changegroup.
1005 1012 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1006 1013 # Some bases may turn out to be superfluous, and some heads may be
1007 1014 # too. nodesbetween will return the minimal set of bases and heads
1008 1015 # necessary to re-create the changegroup.
1009 1016
1010 1017 # Known heads are the list of heads that it is assumed the recipient
1011 1018 # of this changegroup will know about.
1012 1019 knownheads = {}
1013 1020 # We assume that all parents of bases are known heads.
1014 1021 for n in bases:
1015 1022 for p in cl.parents(n):
1016 1023 if p != nullid:
1017 1024 knownheads[p] = 1
1018 1025 knownheads = knownheads.keys()
1019 1026 if knownheads:
1020 1027 # Now that we know what heads are known, we can compute which
1021 1028 # changesets are known. The recipient must know about all
1022 1029 # changesets required to reach the known heads from the null
1023 1030 # changeset.
1024 1031 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1025 1032 junk = None
1026 1033 # Transform the list into an ersatz set.
1027 1034 has_cl_set = dict.fromkeys(has_cl_set)
1028 1035 else:
1029 1036 # If there were no known heads, the recipient cannot be assumed to
1030 1037 # know about any changesets.
1031 1038 has_cl_set = {}
1032 1039
1033 1040 # Make it easy to refer to self.manifest
1034 1041 mnfst = self.manifest
1035 1042 # We don't know which manifests are missing yet
1036 1043 msng_mnfst_set = {}
1037 1044 # Nor do we know which filenodes are missing.
1038 1045 msng_filenode_set = {}
1039 1046
1040 1047 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1041 1048 junk = None
1042 1049
1043 1050 # A changeset always belongs to itself, so the changenode lookup
1044 1051 # function for a changenode is identity.
1045 1052 def identity(x):
1046 1053 return x
1047 1054
1048 1055 # A function generating function. Sets up an environment for the
1049 1056 # inner function.
1050 1057 def cmp_by_rev_func(revlog):
1051 1058 # Compare two nodes by their revision number in the environment's
1052 1059 # revision history. Since the revision number both represents the
1053 1060 # most efficient order to read the nodes in, and represents a
1054 1061 # topological sorting of the nodes, this function is often useful.
1055 1062 def cmp_by_rev(a, b):
1056 1063 return cmp(revlog.rev(a), revlog.rev(b))
1057 1064 return cmp_by_rev
1058 1065
1059 1066 # If we determine that a particular file or manifest node must be a
1060 1067 # node that the recipient of the changegroup will already have, we can
1061 1068 # also assume the recipient will have all the parents. This function
1062 1069 # prunes them from the set of missing nodes.
1063 1070 def prune_parents(revlog, hasset, msngset):
1064 1071 haslst = hasset.keys()
1065 1072 haslst.sort(cmp_by_rev_func(revlog))
1066 1073 for node in haslst:
1067 1074 parentlst = [p for p in revlog.parents(node) if p != nullid]
1068 1075 while parentlst:
1069 1076 n = parentlst.pop()
1070 1077 if n not in hasset:
1071 1078 hasset[n] = 1
1072 1079 p = [p for p in revlog.parents(n) if p != nullid]
1073 1080 parentlst.extend(p)
1074 1081 for n in hasset:
1075 1082 msngset.pop(n, None)
1076 1083
1077 1084 # This is a function generating function used to set up an environment
1078 1085 # for the inner function to execute in.
1079 1086 def manifest_and_file_collector(changedfileset):
1080 1087 # This is an information gathering function that gathers
1081 1088 # information from each changeset node that goes out as part of
1082 1089 # the changegroup. The information gathered is a list of which
1083 1090 # manifest nodes are potentially required (the recipient may
1084 1091 # already have them) and total list of all files which were
1085 1092 # changed in any changeset in the changegroup.
1086 1093 #
1087 1094 # We also remember the first changenode that referenced each
1088 1095 # manifest, so we can later determine which changenode 'owns'
1089 1096 # the manifest.
1090 1097 def collect_manifests_and_files(clnode):
1091 1098 c = cl.read(clnode)
1092 1099 for f in c[3]:
1093 1100 # This is to make sure we only have one instance of each
1094 1101 # filename string for each filename.
1095 1102 changedfileset.setdefault(f, f)
1096 1103 msng_mnfst_set.setdefault(c[0], clnode)
1097 1104 return collect_manifests_and_files
1098 1105
1099 1106 # Figure out which manifest nodes (of the ones we think might be part
1100 1107 # of the changegroup) the recipient must know about and remove them
1101 1108 # from the changegroup.
1102 1109 def prune_manifests():
1103 1110 has_mnfst_set = {}
1104 1111 for n in msng_mnfst_set:
1105 1112 # If a 'missing' manifest thinks it belongs to a changenode
1106 1113 # the recipient is assumed to have, obviously the recipient
1107 1114 # must have that manifest.
1108 1115 linknode = cl.node(mnfst.linkrev(n))
1109 1116 if linknode in has_cl_set:
1110 1117 has_mnfst_set[n] = 1
1111 1118 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1112 1119
1113 1120 # Use the information collected in collect_manifests_and_files to say
1114 1121 # which changenode any manifestnode belongs to.
1115 1122 def lookup_manifest_link(mnfstnode):
1116 1123 return msng_mnfst_set[mnfstnode]
1117 1124
1118 1125 # A function generating function that sets up the initial environment
1119 1126 # for the inner function.
1120 1127 def filenode_collector(changedfiles):
1121 1128 next_rev = [0]
1122 1129 # This gathers information from each manifestnode included in the
1123 1130 # changegroup about which filenodes the manifest node references
1124 1131 # so we can include those in the changegroup too.
1125 1132 #
1126 1133 # It also remembers which changenode each filenode belongs to. It
1127 1134 # does this by assuming that a filenode belongs to the changenode
1128 1135 # that the first manifest referencing it belongs to.
1129 1136 def collect_msng_filenodes(mnfstnode):
1130 1137 r = mnfst.rev(mnfstnode)
1131 1138 if r == next_rev[0]:
1132 1139 # If the last rev we looked at was the one just previous,
1133 1140 # we only need to see a diff.
1134 1141 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1135 1142 # For each line in the delta
1136 1143 for dline in delta.splitlines():
1137 1144 # get the filename and filenode for that line
1138 1145 f, fnode = dline.split('\0')
1139 1146 fnode = bin(fnode[:40])
1140 1147 f = changedfiles.get(f, None)
1141 1148 # And if the file is in the list of files we care
1142 1149 # about.
1143 1150 if f is not None:
1144 1151 # Get the changenode this manifest belongs to
1145 1152 clnode = msng_mnfst_set[mnfstnode]
1146 1153 # Create the set of filenodes for the file if
1147 1154 # there isn't one already.
1148 1155 ndset = msng_filenode_set.setdefault(f, {})
1149 1156 # And set the filenode's changelog node to the
1150 1157 # manifest's if it hasn't been set already.
1151 1158 ndset.setdefault(fnode, clnode)
1152 1159 else:
1153 1160 # Otherwise we need a full manifest.
1154 1161 m = mnfst.read(mnfstnode)
1155 1162 # For every file we care about.
1156 1163 for f in changedfiles:
1157 1164 fnode = m.get(f, None)
1158 1165 # If it's in the manifest
1159 1166 if fnode is not None:
1160 1167 # See comments above.
1161 1168 clnode = msng_mnfst_set[mnfstnode]
1162 1169 ndset = msng_filenode_set.setdefault(f, {})
1163 1170 ndset.setdefault(fnode, clnode)
1164 1171 # Remember the revision we hope to see next.
1165 1172 next_rev[0] = r + 1
1166 1173 return collect_msng_filenodes
1167 1174
1168 1175 # We have a list of filenodes we think we need for a file, so let's
1169 1176 # remove all those we know the recipient must have.
1170 1177 def prune_filenodes(f, filerevlog):
1171 1178 msngset = msng_filenode_set[f]
1172 1179 hasset = {}
1173 1180 # If a 'missing' filenode thinks it belongs to a changenode we
1174 1181 # assume the recipient must have, then the recipient must have
1175 1182 # that filenode.
1176 1183 for n in msngset:
1177 1184 clnode = cl.node(filerevlog.linkrev(n))
1178 1185 if clnode in has_cl_set:
1179 1186 hasset[n] = 1
1180 1187 prune_parents(filerevlog, hasset, msngset)
1181 1188
1182 1189 # A function generating function that sets up a context for the
1183 1190 # inner function.
1184 1191 def lookup_filenode_link_func(fname):
1185 1192 msngset = msng_filenode_set[fname]
1186 1193 # Lookup the changenode the filenode belongs to.
1187 1194 def lookup_filenode_link(fnode):
1188 1195 return msngset[fnode]
1189 1196 return lookup_filenode_link
1190 1197
1191 1198 # Now that we have all these utility functions to help out and
1192 1199 # logically divide up the task, generate the group.
1193 1200 def gengroup():
1194 1201 # The set of changed files starts empty.
1195 1202 changedfiles = {}
1196 1203 # Create a changenode group generator that will call our functions
1197 1204 # back to lookup the owning changenode and collect information.
1198 1205 group = cl.group(msng_cl_lst, identity,
1199 1206 manifest_and_file_collector(changedfiles))
1200 1207 for chnk in group:
1201 1208 yield chnk
1202 1209
1203 1210 # The list of manifests has been collected by the generator
1204 1211 # calling our functions back.
1205 1212 prune_manifests()
1206 1213 msng_mnfst_lst = msng_mnfst_set.keys()
1207 1214 # Sort the manifestnodes by revision number.
1208 1215 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1209 1216 # Create a generator for the manifestnodes that calls our lookup
1210 1217 # and data collection functions back.
1211 1218 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1212 1219 filenode_collector(changedfiles))
1213 1220 for chnk in group:
1214 1221 yield chnk
1215 1222
1216 1223 # These are no longer needed, dereference and toss the memory for
1217 1224 # them.
1218 1225 msng_mnfst_lst = None
1219 1226 msng_mnfst_set.clear()
1220 1227
1221 1228 changedfiles = changedfiles.keys()
1222 1229 changedfiles.sort()
1223 1230 # Go through all our files in order sorted by name.
1224 1231 for fname in changedfiles:
1225 1232 filerevlog = self.file(fname)
1226 1233 # Toss out the filenodes that the recipient isn't really
1227 1234 # missing.
1228 1235 if msng_filenode_set.has_key(fname):
1229 1236 prune_filenodes(fname, filerevlog)
1230 1237 msng_filenode_lst = msng_filenode_set[fname].keys()
1231 1238 else:
1232 1239 msng_filenode_lst = []
1233 1240 # If any filenodes are left, generate the group for them,
1234 1241 # otherwise don't bother.
1235 1242 if len(msng_filenode_lst) > 0:
1236 1243 yield struct.pack(">l", len(fname) + 4) + fname
1237 1244 # Sort the filenodes by their revision #
1238 1245 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1239 1246 # Create a group generator and only pass in a changenode
1240 1247 # lookup function as we need to collect no information
1241 1248 # from filenodes.
1242 1249 group = filerevlog.group(msng_filenode_lst,
1243 1250 lookup_filenode_link_func(fname))
1244 1251 for chnk in group:
1245 1252 yield chnk
1246 1253 if msng_filenode_set.has_key(fname):
1247 1254 # Don't need this anymore, toss it to free memory.
1248 1255 del msng_filenode_set[fname]
1249 1256 # Signal that no more groups are left.
1250 1257 yield struct.pack(">l", 0)
1251 1258
1252 1259 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1253 1260
1254 1261 return util.chunkbuffer(gengroup())
1255 1262
1256 1263 def changegroup(self, basenodes, source):
1257 1264 """Generate a changegroup of all nodes that we have that a recipient
1258 1265 doesn't.
1259 1266
1260 1267 This is much easier than the previous function as we can assume that
1261 1268 the recipient has any changenode we aren't sending them."""
1262 1269
1263 1270 self.hook('preoutgoing', throw=True, source=source)
1264 1271
1265 1272 cl = self.changelog
1266 1273 nodes = cl.nodesbetween(basenodes, None)[0]
1267 1274 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1268 1275
1269 1276 def identity(x):
1270 1277 return x
1271 1278
1272 1279 def gennodelst(revlog):
1273 1280 for r in xrange(0, revlog.count()):
1274 1281 n = revlog.node(r)
1275 1282 if revlog.linkrev(n) in revset:
1276 1283 yield n
1277 1284
1278 1285 def changed_file_collector(changedfileset):
1279 1286 def collect_changed_files(clnode):
1280 1287 c = cl.read(clnode)
1281 1288 for fname in c[3]:
1282 1289 changedfileset[fname] = 1
1283 1290 return collect_changed_files
1284 1291
1285 1292 def lookuprevlink_func(revlog):
1286 1293 def lookuprevlink(n):
1287 1294 return cl.node(revlog.linkrev(n))
1288 1295 return lookuprevlink
1289 1296
1290 1297 def gengroup():
1291 1298 # construct a list of all changed files
1292 1299 changedfiles = {}
1293 1300
1294 1301 for chnk in cl.group(nodes, identity,
1295 1302 changed_file_collector(changedfiles)):
1296 1303 yield chnk
1297 1304 changedfiles = changedfiles.keys()
1298 1305 changedfiles.sort()
1299 1306
1300 1307 mnfst = self.manifest
1301 1308 nodeiter = gennodelst(mnfst)
1302 1309 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1303 1310 yield chnk
1304 1311
1305 1312 for fname in changedfiles:
1306 1313 filerevlog = self.file(fname)
1307 1314 nodeiter = gennodelst(filerevlog)
1308 1315 nodeiter = list(nodeiter)
1309 1316 if nodeiter:
1310 1317 yield struct.pack(">l", len(fname) + 4) + fname
1311 1318 lookup = lookuprevlink_func(filerevlog)
1312 1319 for chnk in filerevlog.group(nodeiter, lookup):
1313 1320 yield chnk
1314 1321
1315 1322 yield struct.pack(">l", 0)
1316 1323 self.hook('outgoing', node=hex(nodes[0]), source=source)
1317 1324
1318 1325 return util.chunkbuffer(gengroup())
1319 1326
1320 1327 def addchangegroup(self, source):
1321 1328
1322 1329 def getchunk():
1323 1330 d = source.read(4)
1324 1331 if not d:
1325 1332 return ""
1326 1333 l = struct.unpack(">l", d)[0]
1327 1334 if l <= 4:
1328 1335 return ""
1329 1336 d = source.read(l - 4)
1330 1337 if len(d) < l - 4:
1331 1338 raise repo.RepoError(_("premature EOF reading chunk"
1332 1339 " (got %d bytes, expected %d)")
1333 1340 % (len(d), l - 4))
1334 1341 return d
1335 1342
1336 1343 def getgroup():
1337 1344 while 1:
1338 1345 c = getchunk()
1339 1346 if not c:
1340 1347 break
1341 1348 yield c
1342 1349
1343 1350 def csmap(x):
1344 1351 self.ui.debug(_("add changeset %s\n") % short(x))
1345 1352 return self.changelog.count()
1346 1353
1347 1354 def revmap(x):
1348 1355 return self.changelog.rev(x)
1349 1356
1350 1357 if not source:
1351 1358 return
1352 1359
1353 1360 self.hook('prechangegroup', throw=True)
1354 1361
1355 1362 changesets = files = revisions = 0
1356 1363
1357 1364 tr = self.transaction()
1358 1365
1359 1366 oldheads = len(self.changelog.heads())
1360 1367
1361 1368 # pull off the changeset group
1362 1369 self.ui.status(_("adding changesets\n"))
1363 1370 co = self.changelog.tip()
1364 1371 cn = self.changelog.addgroup(getgroup(), csmap, tr, 1) # unique
1365 1372 cnr, cor = map(self.changelog.rev, (cn, co))
1366 1373 if cn == nullid:
1367 1374 cnr = cor
1368 1375 changesets = cnr - cor
1369 1376
1370 1377 # pull off the manifest group
1371 1378 self.ui.status(_("adding manifests\n"))
1372 1379 mm = self.manifest.tip()
1373 1380 mo = self.manifest.addgroup(getgroup(), revmap, tr)
1374 1381
1375 1382 # process the files
1376 1383 self.ui.status(_("adding file changes\n"))
1377 1384 while 1:
1378 1385 f = getchunk()
1379 1386 if not f:
1380 1387 break
1381 1388 self.ui.debug(_("adding %s revisions\n") % f)
1382 1389 fl = self.file(f)
1383 1390 o = fl.count()
1384 1391 n = fl.addgroup(getgroup(), revmap, tr)
1385 1392 revisions += fl.count() - o
1386 1393 files += 1
1387 1394
1388 1395 newheads = len(self.changelog.heads())
1389 1396 heads = ""
1390 1397 if oldheads and newheads > oldheads:
1391 1398 heads = _(" (+%d heads)") % (newheads - oldheads)
1392 1399
1393 1400 self.ui.status(_("added %d changesets"
1394 1401 " with %d changes to %d files%s\n")
1395 1402 % (changesets, revisions, files, heads))
1396 1403
1397 1404 self.hook('pretxnchangegroup', throw=True,
1398 1405 node=hex(self.changelog.node(cor+1)))
1399 1406
1400 1407 tr.close()
1401 1408
1402 1409 if changesets > 0:
1403 1410 self.hook("changegroup", node=hex(self.changelog.node(cor+1)))
1404 1411
1405 1412 for i in range(cor + 1, cnr + 1):
1406 1413 self.hook("incoming", node=hex(self.changelog.node(i)))
1407 1414
1408 1415 def update(self, node, allow=False, force=False, choose=None,
1409 1416 moddirstate=True, forcemerge=False, wlock=None):
1410 1417 pl = self.dirstate.parents()
1411 1418 if not force and pl[1] != nullid:
1412 1419 self.ui.warn(_("aborting: outstanding uncommitted merges\n"))
1413 1420 return 1
1414 1421
1415 1422 err = False
1416 1423
1417 1424 p1, p2 = pl[0], node
1418 1425 pa = self.changelog.ancestor(p1, p2)
1419 1426 m1n = self.changelog.read(p1)[0]
1420 1427 m2n = self.changelog.read(p2)[0]
1421 1428 man = self.manifest.ancestor(m1n, m2n)
1422 1429 m1 = self.manifest.read(m1n)
1423 1430 mf1 = self.manifest.readflags(m1n)
1424 1431 m2 = self.manifest.read(m2n).copy()
1425 1432 mf2 = self.manifest.readflags(m2n)
1426 1433 ma = self.manifest.read(man)
1427 1434 mfa = self.manifest.readflags(man)
1428 1435
1429 1436 modified, added, removed, deleted, unknown = self.changes()
1430 1437
1431 1438 # is this a jump, or a merge? i.e. is there a linear path
1432 1439 # from p1 to p2?
1433 1440 linear_path = (pa == p1 or pa == p2)
1434 1441
1435 1442 if allow and linear_path:
1436 1443 raise util.Abort(_("there is nothing to merge, "
1437 1444 "just use 'hg update'"))
1438 1445 if allow and not forcemerge:
1439 1446 if modified or added or removed:
1440 1447 raise util.Abort(_("outstanding uncommitted changes"))
1441 1448 if not forcemerge and not force:
1442 1449 for f in unknown:
1443 1450 if f in m2:
1444 1451 t1 = self.wread(f)
1445 1452 t2 = self.file(f).read(m2[f])
1446 1453 if cmp(t1, t2) != 0:
1447 1454 raise util.Abort(_("'%s' already exists in the working"
1448 1455 " dir and differs from remote") % f)
1449 1456
1450 1457 # resolve the manifest to determine which files
1451 1458 # we care about merging
1452 1459 self.ui.note(_("resolving manifests\n"))
1453 1460 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1454 1461 (force, allow, moddirstate, linear_path))
1455 1462 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1456 1463 (short(man), short(m1n), short(m2n)))
1457 1464
1458 1465 merge = {}
1459 1466 get = {}
1460 1467 remove = []
1461 1468
1462 1469 # construct a working dir manifest
1463 1470 mw = m1.copy()
1464 1471 mfw = mf1.copy()
1465 1472 umap = dict.fromkeys(unknown)
1466 1473
1467 1474 for f in added + modified + unknown:
1468 1475 mw[f] = ""
1469 1476 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1470 1477
1471 1478 if moddirstate and not wlock:
1472 1479 wlock = self.wlock()
1473 1480
1474 1481 for f in deleted + removed:
1475 1482 if f in mw:
1476 1483 del mw[f]
1477 1484
1478 1485 # If we're jumping between revisions (as opposed to merging),
1479 1486 # and if neither the working directory nor the target rev has
1480 1487 # the file, then we need to remove it from the dirstate, to
1481 1488 # prevent the dirstate from listing the file when it is no
1482 1489 # longer in the manifest.
1483 1490 if moddirstate and linear_path and f not in m2:
1484 1491 self.dirstate.forget((f,))
1485 1492
1486 1493 # Compare manifests
1487 1494 for f, n in mw.iteritems():
1488 1495 if choose and not choose(f):
1489 1496 continue
1490 1497 if f in m2:
1491 1498 s = 0
1492 1499
1493 1500 # is the wfile new since m1, and match m2?
1494 1501 if f not in m1:
1495 1502 t1 = self.wread(f)
1496 1503 t2 = self.file(f).read(m2[f])
1497 1504 if cmp(t1, t2) == 0:
1498 1505 n = m2[f]
1499 1506 del t1, t2
1500 1507
1501 1508 # are files different?
1502 1509 if n != m2[f]:
1503 1510 a = ma.get(f, nullid)
1504 1511 # are both different from the ancestor?
1505 1512 if n != a and m2[f] != a:
1506 1513 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1507 1514 # merge executable bits
1508 1515 # "if we changed or they changed, change in merge"
1509 1516 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1510 1517 mode = ((a^b) | (a^c)) ^ a
1511 1518 merge[f] = (m1.get(f, nullid), m2[f], mode)
1512 1519 s = 1
1513 1520 # are we clobbering?
1514 1521 # is remote's version newer?
1515 1522 # or are we going back in time?
1516 1523 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1517 1524 self.ui.debug(_(" remote %s is newer, get\n") % f)
1518 1525 get[f] = m2[f]
1519 1526 s = 1
1520 1527 elif f in umap:
1521 1528 # this unknown file is the same as the checkout
1522 1529 get[f] = m2[f]
1523 1530
1524 1531 if not s and mfw[f] != mf2[f]:
1525 1532 if force:
1526 1533 self.ui.debug(_(" updating permissions for %s\n") % f)
1527 1534 util.set_exec(self.wjoin(f), mf2[f])
1528 1535 else:
1529 1536 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1530 1537 mode = ((a^b) | (a^c)) ^ a
1531 1538 if mode != b:
1532 1539 self.ui.debug(_(" updating permissions for %s\n")
1533 1540 % f)
1534 1541 util.set_exec(self.wjoin(f), mode)
1535 1542 del m2[f]
1536 1543 elif f in ma:
1537 1544 if n != ma[f]:
1538 1545 r = _("d")
1539 1546 if not force and (linear_path or allow):
1540 1547 r = self.ui.prompt(
1541 1548 (_(" local changed %s which remote deleted\n") % f) +
1542 1549 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1543 1550 if r == _("d"):
1544 1551 remove.append(f)
1545 1552 else:
1546 1553 self.ui.debug(_("other deleted %s\n") % f)
1547 1554 remove.append(f) # other deleted it
1548 1555 else:
1549 1556 # file is created on branch or in working directory
1550 1557 if force and f not in umap:
1551 1558 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1552 1559 remove.append(f)
1553 1560 elif n == m1.get(f, nullid): # same as parent
1554 1561 if p2 == pa: # going backwards?
1555 1562 self.ui.debug(_("remote deleted %s\n") % f)
1556 1563 remove.append(f)
1557 1564 else:
1558 1565 self.ui.debug(_("local modified %s, keeping\n") % f)
1559 1566 else:
1560 1567 self.ui.debug(_("working dir created %s, keeping\n") % f)
1561 1568
1562 1569 for f, n in m2.iteritems():
1563 1570 if choose and not choose(f):
1564 1571 continue
1565 1572 if f[0] == "/":
1566 1573 continue
1567 1574 if f in ma and n != ma[f]:
1568 1575 r = _("k")
1569 1576 if not force and (linear_path or allow):
1570 1577 r = self.ui.prompt(
1571 1578 (_("remote changed %s which local deleted\n") % f) +
1572 1579 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1573 1580 if r == _("k"):
1574 1581 get[f] = n
1575 1582 elif f not in ma:
1576 1583 self.ui.debug(_("remote created %s\n") % f)
1577 1584 get[f] = n
1578 1585 else:
1579 1586 if force or p2 == pa: # going backwards?
1580 1587 self.ui.debug(_("local deleted %s, recreating\n") % f)
1581 1588 get[f] = n
1582 1589 else:
1583 1590 self.ui.debug(_("local deleted %s\n") % f)
1584 1591
1585 1592 del mw, m1, m2, ma
1586 1593
1587 1594 if force:
1588 1595 for f in merge:
1589 1596 get[f] = merge[f][1]
1590 1597 merge = {}
1591 1598
1592 1599 if linear_path or force:
1593 1600 # we don't need to do any magic, just jump to the new rev
1594 1601 branch_merge = False
1595 1602 p1, p2 = p2, nullid
1596 1603 else:
1597 1604 if not allow:
1598 1605 self.ui.status(_("this update spans a branch"
1599 1606 " affecting the following files:\n"))
1600 1607 fl = merge.keys() + get.keys()
1601 1608 fl.sort()
1602 1609 for f in fl:
1603 1610 cf = ""
1604 1611 if f in merge:
1605 1612 cf = _(" (resolve)")
1606 1613 self.ui.status(" %s%s\n" % (f, cf))
1607 1614 self.ui.warn(_("aborting update spanning branches!\n"))
1608 1615 self.ui.status(_("(use update -m to merge across branches"
1609 1616 " or -C to lose changes)\n"))
1610 1617 return 1
1611 1618 branch_merge = True
1612 1619
1613 1620 # get the files we don't need to change
1614 1621 files = get.keys()
1615 1622 files.sort()
1616 1623 for f in files:
1617 1624 if f[0] == "/":
1618 1625 continue
1619 1626 self.ui.note(_("getting %s\n") % f)
1620 1627 t = self.file(f).read(get[f])
1621 1628 self.wwrite(f, t)
1622 1629 util.set_exec(self.wjoin(f), mf2[f])
1623 1630 if moddirstate:
1624 1631 if branch_merge:
1625 1632 self.dirstate.update([f], 'n', st_mtime=-1)
1626 1633 else:
1627 1634 self.dirstate.update([f], 'n')
1628 1635
1629 1636 # merge the tricky bits
1630 1637 files = merge.keys()
1631 1638 files.sort()
1632 1639 for f in files:
1633 1640 self.ui.status(_("merging %s\n") % f)
1634 1641 my, other, flag = merge[f]
1635 1642 ret = self.merge3(f, my, other)
1636 1643 if ret:
1637 1644 err = True
1638 1645 util.set_exec(self.wjoin(f), flag)
1639 1646 if moddirstate:
1640 1647 if branch_merge:
1641 1648 # We've done a branch merge, mark this file as merged
1642 1649 # so that we properly record the merger later
1643 1650 self.dirstate.update([f], 'm')
1644 1651 else:
1645 1652 # We've update-merged a locally modified file, so
1646 1653 # we set the dirstate to emulate a normal checkout
1647 1654 # of that file some time in the past. Thus our
1648 1655 # merge will appear as a normal local file
1649 1656 # modification.
1650 1657 f_len = len(self.file(f).read(other))
1651 1658 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1652 1659
1653 1660 remove.sort()
1654 1661 for f in remove:
1655 1662 self.ui.note(_("removing %s\n") % f)
1656 1663 try:
1657 1664 util.unlink(self.wjoin(f))
1658 1665 except OSError, inst:
1659 1666 if inst.errno != errno.ENOENT:
1660 1667 self.ui.warn(_("update failed to remove %s: %s!\n") %
1661 1668 (f, inst.strerror))
1662 1669 if moddirstate:
1663 1670 if branch_merge:
1664 1671 self.dirstate.update(remove, 'r')
1665 1672 else:
1666 1673 self.dirstate.forget(remove)
1667 1674
1668 1675 if moddirstate:
1669 1676 self.dirstate.setparents(p1, p2)
1670 1677 return err
1671 1678
1672 1679 def merge3(self, fn, my, other):
1673 1680 """perform a 3-way merge in the working directory"""
1674 1681
1675 1682 def temp(prefix, node):
1676 1683 pre = "%s~%s." % (os.path.basename(fn), prefix)
1677 1684 (fd, name) = tempfile.mkstemp("", pre)
1678 1685 f = os.fdopen(fd, "wb")
1679 1686 self.wwrite(fn, fl.read(node), f)
1680 1687 f.close()
1681 1688 return name
1682 1689
1683 1690 fl = self.file(fn)
1684 1691 base = fl.ancestor(my, other)
1685 1692 a = self.wjoin(fn)
1686 1693 b = temp("base", base)
1687 1694 c = temp("other", other)
1688 1695
1689 1696 self.ui.note(_("resolving %s\n") % fn)
1690 1697 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1691 1698 (fn, short(my), short(other), short(base)))
1692 1699
1693 1700 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
1694 1701 or "hgmerge")
1695 1702 r = os.system('%s "%s" "%s" "%s"' % (cmd, a, b, c))
1696 1703 if r:
1697 1704 self.ui.warn(_("merging %s failed!\n") % fn)
1698 1705
1699 1706 os.unlink(b)
1700 1707 os.unlink(c)
1701 1708 return r
1702 1709
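The `merge3()` method above stages the ancestor and the incoming revision as temporary files beside the working copy, then shells out to an external tool (`$HGMERGE`, `ui.merge`, or the `hgmerge` script) as `cmd LOCAL BASE OTHER`, treating a non-zero exit status as a failed merge. A minimal standalone sketch of that flow follows; `external_merge3` and its `cmd` override are hypothetical names for illustration, not part of Mercurial's API:

```python
import os
import subprocess
import tempfile

def external_merge3(local_path, base_text, other_text, cmd=None):
    """Run an external 3-way merge the way merge3() does.

    Writes the ancestor and other revisions to temp files named
    like `<basename>~base.XXXX`, invokes the merge command with
    (local, base, other), and returns its exit status (non-zero
    means the merge failed or left conflicts).
    """
    def temp(prefix, text):
        fd, name = tempfile.mkstemp(
            prefix="%s~%s." % (os.path.basename(local_path), prefix))
        with os.fdopen(fd, "wb") as f:
            f.write(text)
        return name

    b = temp("base", base_text)
    c = temp("other", other_text)
    try:
        # Mercurial consults HGMERGE, then ui.merge, then "hgmerge".
        cmd = cmd or os.environ.get("HGMERGE", "hgmerge")
        return subprocess.call([cmd, local_path, b, c])
    finally:
        # Clean up the two temporaries; the local file keeps the result.
        os.unlink(b)
        os.unlink(c)
```

Note that, like the original, the merged result is left in the working-copy file itself; only the base and other temporaries are removed afterwards.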
1703 1710 def verify(self):
1704 1711 filelinkrevs = {}
1705 1712 filenodes = {}
1706 1713 changesets = revisions = files = 0
1707 1714 errors = [0]
1708 1715 neededmanifests = {}
1709 1716
1710 1717 def err(msg):
1711 1718 self.ui.warn(msg + "\n")
1712 1719 errors[0] += 1
1713 1720
1714 1721 def checksize(obj, name):
1715 1722 d = obj.checksize()
1716 1723 if d[0]:
1717 1724 err(_("%s data length off by %d bytes") % (name, d[0]))
1718 1725 if d[1]:
1719 1726 err(_("%s index contains %d extra bytes") % (name, d[1]))
1720 1727
1721 1728 seen = {}
1722 1729 self.ui.status(_("checking changesets\n"))
1723 1730 checksize(self.changelog, "changelog")
1724 1731
1725 1732 for i in range(self.changelog.count()):
1726 1733 changesets += 1
1727 1734 n = self.changelog.node(i)
1728 1735 l = self.changelog.linkrev(n)
1729 1736 if l != i:
1730 1737 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
1731 1738 if n in seen:
1732 1739 err(_("duplicate changeset at revision %d") % i)
1733 1740 seen[n] = 1
1734 1741
1735 1742 for p in self.changelog.parents(n):
1736 1743 if p not in self.changelog.nodemap:
1737 1744 err(_("changeset %s has unknown parent %s") %
1738 1745 (short(n), short(p)))
1739 1746 try:
1740 1747 changes = self.changelog.read(n)
1741 1748 except KeyboardInterrupt:
1742 1749 self.ui.warn(_("interrupted"))
1743 1750 raise
1744 1751 except Exception, inst:
1745 1752 err(_("unpacking changeset %s: %s") % (short(n), inst))
1746 1753
1747 1754 neededmanifests[changes[0]] = n
1748 1755
1749 1756 for f in changes[3]:
1750 1757 filelinkrevs.setdefault(f, []).append(i)
1751 1758
1752 1759 seen = {}
1753 1760 self.ui.status(_("checking manifests\n"))
1754 1761 checksize(self.manifest, "manifest")
1755 1762
1756 1763 for i in range(self.manifest.count()):
1757 1764 n = self.manifest.node(i)
1758 1765 l = self.manifest.linkrev(n)
1759 1766
1760 1767 if l < 0 or l >= self.changelog.count():
1761 1768 err(_("bad manifest link (%d) at revision %d") % (l, i))
1762 1769
1763 1770 if n in neededmanifests:
1764 1771 del neededmanifests[n]
1765 1772
1766 1773 if n in seen:
1767 1774 err(_("duplicate manifest at revision %d") % i)
1768 1775
1769 1776 seen[n] = 1
1770 1777
1771 1778 for p in self.manifest.parents(n):
1772 1779 if p not in self.manifest.nodemap:
1773 1780 err(_("manifest %s has unknown parent %s") %
1774 1781 (short(n), short(p)))
1775 1782
1776 1783 try:
1777 1784 delta = mdiff.patchtext(self.manifest.delta(n))
1778 1785 except KeyboardInterrupt:
1779 1786 self.ui.warn(_("interrupted"))
1780 1787 raise
1781 1788 except Exception, inst:
1782 1789 err(_("unpacking manifest %s: %s") % (short(n), inst))
1783 1790
1784 1791 ff = [ l.split('\0') for l in delta.splitlines() ]
1785 1792 for f, fn in ff:
1786 1793 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
1787 1794
1788 1795 self.ui.status(_("crosschecking files in changesets and manifests\n"))
1789 1796
1790 1797 for m, c in neededmanifests.items():
1791 1798 err(_("Changeset %s refers to unknown manifest %s") %
1792 1799 (short(m), short(c)))
1793 1800 del neededmanifests
1794 1801
1795 1802 for f in filenodes:
1796 1803 if f not in filelinkrevs:
1797 1804 err(_("file %s in manifest but not in changesets") % f)
1798 1805
1799 1806 for f in filelinkrevs:
1800 1807 if f not in filenodes:
1801 1808 err(_("file %s in changeset but not in manifest") % f)
1802 1809
1803 1810 self.ui.status(_("checking files\n"))
1804 1811 ff = filenodes.keys()
1805 1812 ff.sort()
1806 1813 for f in ff:
1807 1814 if f == "/dev/null":
1808 1815 continue
1809 1816 files += 1
1810 1817 fl = self.file(f)
1811 1818 checksize(fl, f)
1812 1819
1813 1820 nodes = {nullid: 1}
1814 1821 seen = {}
1815 1822 for i in range(fl.count()):
1816 1823 revisions += 1
1817 1824 n = fl.node(i)
1818 1825
1819 1826 if n in seen:
1820 1827 err(_("%s: duplicate revision %d") % (f, i))
1821 1828 if n not in filenodes[f]:
1822 1829 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
1823 1830 else:
1824 1831 del filenodes[f][n]
1825 1832
1826 1833 flr = fl.linkrev(n)
1827 1834 if flr not in filelinkrevs[f]:
1828 1835 err(_("%s:%s points to unexpected changeset %d")
1829 1836 % (f, short(n), flr))
1830 1837 else:
1831 1838 filelinkrevs[f].remove(flr)
1832 1839
1833 1840 # verify contents
1834 1841 try:
1835 1842 t = fl.read(n)
1836 1843 except KeyboardInterrupt:
1837 1844 self.ui.warn(_("interrupted"))
1838 1845 raise
1839 1846 except Exception, inst:
1840 1847 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
1841 1848
1842 1849 # verify parents
1843 1850 (p1, p2) = fl.parents(n)
1844 1851 if p1 not in nodes:
1845 1852 err(_("file %s:%s unknown parent 1 %s") %
1846 1853 (f, short(n), short(p1)))
1847 1854 if p2 not in nodes:
1848 1855 err(_("file %s:%s unknown parent 2 %s") %
1849 1856 (f, short(n), short(p2)))
1850 1857 nodes[n] = 1
1851 1858
1852 1859 # cross-check
1853 1860 for node in filenodes[f]:
1854 1861 err(_("node %s in manifests not in %s") % (hex(node), f))
1855 1862
1856 1863 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
1857 1864 (files, changesets, revisions))
1858 1865
1859 1866 if errors[0]:
1860 1867 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
1861 1868 return 1
@@ -1,59 +1,62 b''
1 1 # lock.py - simple locking scheme for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 import errno, os, time
9 9 import util
10 10
11 11 class LockException(Exception):
12 12 pass
13 13 class LockHeld(LockException):
14 14 pass
15 15 class LockUnavailable(LockException):
16 16 pass
17 17
18 18 class lock(object):
19 def __init__(self, file, wait=1, releasefn=None):
19 def __init__(self, file, timeout=-1, releasefn=None):
20 20 self.f = file
21 21 self.held = 0
22 self.wait = wait
22 self.timeout = timeout
23 23 self.releasefn = releasefn
24 24 self.lock()
25 25
26 26 def __del__(self):
27 27 self.release()
28 28
29 29 def lock(self):
30 timeout = self.timeout
30 31 while 1:
31 32 try:
32 33 self.trylock()
33 34 return 1
34 35 except LockHeld, inst:
35 if self.wait:
36 if timeout != 0:
36 37 time.sleep(1)
38 if timeout > 0:
39 timeout -= 1
37 40 continue
38 41 raise inst
39 42
40 43 def trylock(self):
41 44 pid = os.getpid()
42 45 try:
43 46 util.makelock(str(pid), self.f)
44 47 self.held = 1
45 48 except (OSError, IOError), why:
46 49 if why.errno == errno.EEXIST:
47 50 raise LockHeld(util.readlock(self.f))
48 51 else:
49 52 raise LockUnavailable(why)
50 53
51 54 def release(self):
52 55 if self.held:
53 56 self.held = 0
54 57 if self.releasefn:
55 58 self.releasefn()
56 59 try:
57 60 os.unlink(self.f)
58 61 except: pass
59 62
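The diff above replaces the boolean `wait` flag with a `timeout` counter: a negative value retries forever, `0` fails immediately, and a positive value is decremented once per one-second sleep until the lock is either acquired or the budget runs out (hence the default of 1024 seconds mentioned in the changeset summary). A self-contained sketch of that polling loop, using a hypothetical `acquire` helper and plain `os.open` in place of `util.makelock`:

```python
import errno
import os
import time

class LockHeld(Exception):
    """Raised when the lock file exists and the timeout expires."""

def acquire(path, timeout=-1):
    """Create `path` as a lock file, polling once per second.

    timeout < 0 waits forever, timeout == 0 fails immediately,
    timeout == n gives up after roughly n seconds -- mirroring
    the patched lock.lock() loop.
    """
    while True:
        try:
            # O_EXCL makes creation atomic: it fails if the file exists.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            return
        except OSError as why:
            if why.errno != errno.EEXIST:
                raise
        if timeout == 0:
            raise LockHeld(path)
        time.sleep(1)
        if timeout > 0:
            timeout -= 1
```

One consequence of counting whole one-second sleeps is that the timeout is approximate; the loop may overshoot slightly if `trylock` itself is slow.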