patch: generalize the use of patchmeta in applydiff()...
Patrick Mezard
r14566:d0c2cc11 default
# keyword.py - $Keyword$ expansion for Mercurial
#
# Copyright 2007-2010 Christian Ebert <blacktrash@gmx.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
#
# $Id$
#
# Keyword expansion hack against the grain of a DSCM
#
# There are many good reasons why this is not needed in a distributed
# SCM, still it may be useful in very small projects based on single
# files (like LaTeX packages), that are mostly addressed to an
# audience not running a version control system.
#
# For in-depth discussion refer to
# <http://mercurial.selenic.com/wiki/KeywordPlan>.
#
# Keyword expansion is based on Mercurial's changeset template mappings.
#
# Binary files are not touched.
#
# Files to act upon/ignore are specified in the [keyword] section.
# Customized keyword template mappings in the [keywordmaps] section.
#
# Run "hg help keyword" and "hg kwdemo" to get info on configuration.

'''expand keywords in tracked files

This extension expands RCS/CVS-like or self-customized $Keywords$ in
tracked text files selected by your configuration.

Keywords are only expanded in local repositories and not stored in the
change history. The mechanism can be regarded as a convenience for the
current user or for archive distribution.

Keywords expand to the changeset data pertaining to the latest change
relative to the working directory parent of each file.

Configuration is done in the [keyword], [keywordset] and [keywordmaps]
sections of hgrc files.

Example::

    [keyword]
    # expand keywords in every python file except those matching "x*"
    **.py =
    x* = ignore

    [keywordset]
    # prefer svn- over cvs-like default keywordmaps
    svn = True

.. note::
   The more specific you are in your filename patterns the less you
   lose speed in huge repositories.

For [keywordmaps] template mapping and expansion demonstration and
control run :hg:`kwdemo`. See :hg:`help templates` for a list of
available templates and filters.

Three additional date template filters are provided:

:``utcdate``: "2006/09/18 15:13:13"
:``svnutcdate``: "2006-09-18 15:13:13Z"
:``svnisodate``: "2006-09-18 08:13:13 -700 (Mon, 18 Sep 2006)"

The default template mappings (view with :hg:`kwdemo -d`) can be
replaced with customized keywords and templates. Again, run
:hg:`kwdemo` to control the results of your configuration changes.

Before changing/disabling active keywords, you must run :hg:`kwshrink`
to avoid storing expanded keywords in the change history.

To force expansion after enabling it, or a configuration change, run
:hg:`kwexpand`.

Expansions spanning more than one line and incremental expansions,
like CVS' $Log$, are not supported. A keyword template map "Log =
{desc}" expands to the first line of the changeset description.
'''

from mercurial import commands, context, cmdutil, dispatch, filelog, extensions
from mercurial import localrepo, match, patch, templatefilters, templater, util
from mercurial import scmutil
from mercurial.hgweb import webcommands
from mercurial.i18n import _
import os, re, shutil, tempfile

commands.optionalrepo += ' kwdemo'

cmdtable = {}
command = cmdutil.command(cmdtable)

# hg commands that do not act on keywords
nokwcommands = ('add addremove annotate bundle export grep incoming init log'
                ' outgoing push tip verify convert email glog')

# hg commands that trigger expansion only when writing to working dir,
# not when reading filelog, and unexpand when reading from working dir
restricted = 'merge kwexpand kwshrink record qrecord resolve transplant'

# names of extensions using dorecord
recordextensions = 'record'

colortable = {
    'kwfiles.enabled': 'green bold',
    'kwfiles.deleted': 'cyan bold underline',
    'kwfiles.enabledunknown': 'green',
    'kwfiles.ignored': 'bold',
    'kwfiles.ignoredunknown': 'none'
}

# date like in cvs' $Date
def utcdate(text):
    ''':utcdate: Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
    '''
    return util.datestr((text[0], 0), '%Y/%m/%d %H:%M:%S')
# date like in svn's $Date
def svnisodate(text):
    ''':svnisodate: Date. Returns a date in this format: "2009-08-18 13:00:13
    +0200 (Tue, 18 Aug 2009)".
    '''
    return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
# date like in svn's $Id
def svnutcdate(text):
    ''':svnutcdate: Date. Returns a UTC-date in this format: "2009-08-18
    11:00:13Z".
    '''
    return util.datestr((text[0], 0), '%Y-%m-%d %H:%M:%SZ')

templatefilters.filters.update({'utcdate': utcdate,
                                'svnisodate': svnisodate,
                                'svnutcdate': svnutcdate})

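These filters all render a Mercurial-style `(unixtime, tzoffset)` date pair. As a rough illustration of what the `utcdate` filter produces, here is a sketch using only the standard library; `utcdate_sketch` is a hypothetical stand-in for the real filter, not part of the extension:

```python
import time

def utcdate_sketch(date):
    # date is a Mercurial-style (unixtime, tzoffset) pair; like the
    # extension's utcdate filter, the offset is dropped and the
    # timestamp is rendered in UTC.
    return time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime(date[0]))

print(utcdate_sketch((1158592393, 0)))  # 2006/09/18 15:13:13
```

The same shape covers `svnutcdate` by swapping in the `'%Y-%m-%d %H:%M:%SZ'` format string.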
# make keyword tools accessible
kwtools = {'templater': None, 'hgcmd': ''}

def _defaultkwmaps(ui):
    '''Returns default keywordmaps according to keywordset configuration.'''
    templates = {
        'Revision': '{node|short}',
        'Author': '{author|user}',
    }
    kwsets = ({
        'Date': '{date|utcdate}',
        'RCSfile': '{file|basename},v',
        'RCSFile': '{file|basename},v', # kept for backwards compatibility
                                        # with hg-keyword
        'Source': '{root}/{file},v',
        'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
        'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
    }, {
        'Date': '{date|svnisodate}',
        'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
        'LastChangedRevision': '{node|short}',
        'LastChangedBy': '{author|user}',
        'LastChangedDate': '{date|svnisodate}',
    })
    templates.update(kwsets[ui.configbool('keywordset', 'svn')])
    return templates

def _shrinktext(text, subfunc):
    '''Helper for keyword expansion removal in text.
    Depending on subfunc also returns number of substitutions.'''
    return subfunc(r'$\1$', text)

def _preselect(wstatus, changed):
    '''Retrieves modified and added files from a working directory state
    and returns the subset of each contained in given changed files
    retrieved from a change context.'''
    modified, added = wstatus[:2]
    modified = [f for f in modified if f in changed]
    added = [f for f in added if f in changed]
    return modified, added


class kwtemplater(object):
    '''
    Sets up keyword templates, corresponding keyword regex, and
    provides keyword substitution functions.
    '''

    def __init__(self, ui, repo, inc, exc):
        self.ui = ui
        self.repo = repo
        self.match = match.match(repo.root, '', [], inc, exc)
        self.restrict = kwtools['hgcmd'] in restricted.split()
        self.record = False

        kwmaps = self.ui.configitems('keywordmaps')
        if kwmaps: # override default templates
            self.templates = dict((k, templater.parsestring(v, False))
                                  for k, v in kwmaps)
        else:
            self.templates = _defaultkwmaps(self.ui)

    @util.propertycache
    def escape(self):
        '''Returns bar-separated and escaped keywords.'''
        return '|'.join(map(re.escape, self.templates.keys()))

    @util.propertycache
    def rekw(self):
        '''Returns regex for unexpanded keywords.'''
        return re.compile(r'\$(%s)\$' % self.escape)

    @util.propertycache
    def rekwexp(self):
        '''Returns regex for expanded keywords.'''
        return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)

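The two cached patterns are easiest to see in isolation. A minimal sketch of how `escape`, `rekw` and `rekwexp` fit together, using a hypothetical two-keyword map in place of `self.templates.keys()`:

```python
import re

# Hypothetical keyword names standing in for self.templates.keys():
keywords = ['Id', 'Date']
escaped = '|'.join(map(re.escape, keywords))              # like the escape property
rekw = re.compile(r'\$(%s)\$' % escaped)                  # unexpanded form: $Id$
rekwexp = re.compile(r'\$(%s): [^$\n\r]*? \$' % escaped)  # expanded form

print(bool(rekw.search('# $Id$')))                         # True
print(rekwexp.sub(r'$\1$', '# $Id: demo.txt,v 1a2b3c $'))  # # $Id$
```

The non-greedy `[^$\n\r]*?` in the expanded pattern keeps a match confined to a single line and to a single `$...$` span.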
    def substitute(self, data, path, ctx, subfunc):
        '''Replaces keywords in data with expanded template.'''
        def kwsub(mobj):
            kw = mobj.group(1)
            ct = cmdutil.changeset_templater(self.ui, self.repo,
                                             False, None, '', False)
            ct.use_template(self.templates[kw])
            self.ui.pushbuffer()
            ct.show(ctx, root=self.repo.root, file=path)
            ekw = templatefilters.firstline(self.ui.popbuffer())
            return '$%s: %s $' % (kw, ekw)
        return subfunc(kwsub, data)

    def linkctx(self, path, fileid):
        '''Similar to filelog.linkrev, but returns a changectx.'''
        return self.repo.filectx(path, fileid=fileid).changectx()

    def expand(self, path, node, data):
        '''Returns data with keywords expanded.'''
        if not self.restrict and self.match(path) and not util.binary(data):
            ctx = self.linkctx(path, node)
            return self.substitute(data, path, ctx, self.rekw.sub)
        return data

    def iskwfile(self, cand, ctx):
        '''Returns subset of candidates which are configured for keyword
        expansion and are not symbolic links.'''
        return [f for f in cand if self.match(f) and not 'l' in ctx.flags(f)]

    def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
        '''Overwrites selected files expanding/shrinking keywords.'''
        if self.restrict or lookup or self.record: # exclude kw_copy
            candidates = self.iskwfile(candidates, ctx)
        if not candidates:
            return
        kwcmd = self.restrict and lookup # kwexpand/kwshrink
        if self.restrict or expand and lookup:
            mf = ctx.manifest()
        lctx = ctx
        re_kw = (self.restrict or rekw) and self.rekw or self.rekwexp
        msg = (expand and _('overwriting %s expanding keywords\n')
               or _('overwriting %s shrinking keywords\n'))
        for f in candidates:
            if self.restrict:
                data = self.repo.file(f).read(mf[f])
            else:
                data = self.repo.wread(f)
            if util.binary(data):
                continue
            if expand:
                if lookup:
                    lctx = self.linkctx(f, mf[f])
                data, found = self.substitute(data, f, lctx, re_kw.subn)
            elif self.restrict:
                found = re_kw.search(data)
            else:
                data, found = _shrinktext(data, re_kw.subn)
            if found:
                self.ui.note(msg % f)
                self.repo.wwrite(f, data, ctx.flags(f))
                if kwcmd:
                    self.repo.dirstate.normal(f)
                elif self.record:
                    self.repo.dirstate.normallookup(f)

    def shrink(self, fname, text):
        '''Returns text with all keyword substitutions removed.'''
        if self.match(fname) and not util.binary(text):
            return _shrinktext(text, self.rekwexp.sub)
        return text

    def shrinklines(self, fname, lines):
        '''Returns lines with keyword substitutions removed.'''
        if self.match(fname):
            text = ''.join(lines)
            if not util.binary(text):
                return _shrinktext(text, self.rekwexp.sub).splitlines(True)
        return lines

    def wread(self, fname, data):
        '''If in restricted mode returns data read from wdir with
        keyword substitutions removed.'''
        return self.restrict and self.shrink(fname, data) or data

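A small sketch of the round-trip `shrinklines` performs: join the lines into one buffer, collapse expanded keywords with an `rekwexp`-style pattern, and resplit while keeping line endings. The single-keyword pattern and the input here are hypothetical:

```python
import re

rekwexp = re.compile(r'\$(Id): [^$\n\r]*? \$')  # hypothetical single-keyword pattern
lines = ['first $Id: demo.txt,v 1a2b3c $\n', 'second line\n']

text = ''.join(lines)                # join into one buffer
shrunk = rekwexp.sub(r'$\1$', text)  # drop the expansion payload
result = shrunk.splitlines(True)     # resplit, keeping '\n'

print(result)  # ['first $Id$\n', 'second line\n']
```

`splitlines(True)` matters here: it preserves the original line terminators, so callers that compare or rewrite line lists see exactly the shrunk counterparts of their inputs.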
class kwfilelog(filelog.filelog):
    '''
    Subclass of filelog to hook into its read, add, cmp methods.
    Keywords are "stored" unexpanded, and processed on reading.
    '''
    def __init__(self, opener, kwt, path):
        super(kwfilelog, self).__init__(opener, path)
        self.kwt = kwt
        self.path = path

    def read(self, node):
        '''Expands keywords when reading filelog.'''
        data = super(kwfilelog, self).read(node)
        if self.renamed(node):
            return data
        return self.kwt.expand(self.path, node, data)

    def add(self, text, meta, tr, link, p1=None, p2=None):
        '''Removes keyword substitutions when adding to filelog.'''
        text = self.kwt.shrink(self.path, text)
        return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)

    def cmp(self, node, text):
        '''Removes keyword substitutions for comparison.'''
        text = self.kwt.shrink(self.path, text)
        return super(kwfilelog, self).cmp(node, text)

def _status(ui, repo, kwt, *pats, **opts):
    '''Bails out if [keyword] configuration is not active.
    Returns status of working directory.'''
    if kwt:
        return repo.status(match=scmutil.match(repo, pats, opts), clean=True,
                           unknown=opts.get('unknown') or opts.get('all'))
    if ui.configitems('keyword'):
        raise util.Abort(_('[keyword] patterns cannot match'))
    raise util.Abort(_('no [keyword] patterns configured'))

def _kwfwrite(ui, repo, expand, *pats, **opts):
    '''Selects files and passes them to kwtemplater.overwrite.'''
    wctx = repo[None]
    if len(wctx.parents()) > 1:
        raise util.Abort(_('outstanding uncommitted merge'))
    kwt = kwtools['templater']
    wlock = repo.wlock()
    try:
        status = _status(ui, repo, kwt, *pats, **opts)
        modified, added, removed, deleted, unknown, ignored, clean = status
        if modified or added or removed or deleted:
            raise util.Abort(_('outstanding uncommitted changes'))
        kwt.overwrite(wctx, clean, True, expand)
    finally:
        wlock.release()

@command('kwdemo',
         [('d', 'default', None, _('show default keyword template maps')),
          ('f', 'rcfile', '',
           _('read maps from rcfile'), _('FILE'))],
         _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'))
def demo(ui, repo, *args, **opts):
    '''print [keywordmaps] configuration and an expansion example

    Show current, custom, or default keyword template maps and their
    expansions.

    Extend the current configuration by specifying maps as arguments
    and using -f/--rcfile to source an external hgrc file.

    Use -d/--default to disable current configuration.

    See :hg:`help templates` for information on templates and filters.
    '''
    def demoitems(section, items):
        ui.write('[%s]\n' % section)
        for k, v in sorted(items):
            ui.write('%s = %s\n' % (k, v))

    fn = 'demo.txt'
    tmpdir = tempfile.mkdtemp('', 'kwdemo.')
    ui.note(_('creating temporary repository at %s\n') % tmpdir)
    repo = localrepo.localrepository(ui, tmpdir, True)
    ui.setconfig('keyword', fn, '')
    svn = ui.configbool('keywordset', 'svn')
    # explicitly set keywordset for demo output
    ui.setconfig('keywordset', 'svn', svn)

    uikwmaps = ui.configitems('keywordmaps')
    if args or opts.get('rcfile'):
        ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
        if uikwmaps:
            ui.status(_('\textending current template maps\n'))
        if opts.get('default') or not uikwmaps:
            if svn:
                ui.status(_('\toverriding default svn keywordset\n'))
            else:
                ui.status(_('\toverriding default cvs keywordset\n'))
        if opts.get('rcfile'):
            ui.readconfig(opts.get('rcfile'))
        if args:
            # simulate hgrc parsing
            rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
            fp = repo.opener('hgrc', 'w')
            fp.writelines(rcmaps)
            fp.close()
            ui.readconfig(repo.join('hgrc'))
        kwmaps = dict(ui.configitems('keywordmaps'))
    elif opts.get('default'):
        if svn:
            ui.status(_('\n\tconfiguration using default svn keywordset\n'))
        else:
            ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
        kwmaps = _defaultkwmaps(ui)
        if uikwmaps:
            ui.status(_('\tdisabling current template maps\n'))
            for k, v in kwmaps.iteritems():
                ui.setconfig('keywordmaps', k, v)
    else:
        ui.status(_('\n\tconfiguration using current keyword template maps\n'))
        kwmaps = dict(uikwmaps) or _defaultkwmaps(ui)

    uisetup(ui)
    reposetup(ui, repo)
    ui.write('[extensions]\nkeyword =\n')
    demoitems('keyword', ui.configitems('keyword'))
    demoitems('keywordset', ui.configitems('keywordset'))
    demoitems('keywordmaps', kwmaps.iteritems())
    keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
    repo.wopener.write(fn, keywords)
    repo[None].add([fn])
    ui.note(_('\nkeywords written to %s:\n') % fn)
    ui.note(keywords)
    repo.dirstate.setbranch('demobranch')
    for name, cmd in ui.configitems('hooks'):
        if name.split('.', 1)[0].find('commit') > -1:
            repo.ui.setconfig('hooks', name, '')
    msg = _('hg keyword configuration and expansion example')
    ui.note("hg ci -m '%s'\n" % msg)
    repo.commit(text=msg)
    ui.status(_('\n\tkeywords expanded\n'))
    ui.write(repo.wread(fn))
    shutil.rmtree(tmpdir, ignore_errors=True)

439 @command('kwexpand', commands.walkopts, _('hg kwexpand [OPTION]... [FILE]...'))
439 @command('kwexpand', commands.walkopts, _('hg kwexpand [OPTION]... [FILE]...'))
440 def expand(ui, repo, *pats, **opts):
440 def expand(ui, repo, *pats, **opts):
441 '''expand keywords in the working directory
441 '''expand keywords in the working directory
442
442
443 Run after (re)enabling keyword expansion.
443 Run after (re)enabling keyword expansion.
444
444
445 kwexpand refuses to run if given files contain local changes.
445 kwexpand refuses to run if given files contain local changes.
446 '''
446 '''
447 # 3rd argument sets expansion to True
447 # 3rd argument sets expansion to True
448 _kwfwrite(ui, repo, True, *pats, **opts)
448 _kwfwrite(ui, repo, True, *pats, **opts)
449
449
450 @command('kwfiles',
450 @command('kwfiles',
451 [('A', 'all', None, _('show keyword status flags of all files')),
451 [('A', 'all', None, _('show keyword status flags of all files')),
452 ('i', 'ignore', None, _('show files excluded from expansion')),
452 ('i', 'ignore', None, _('show files excluded from expansion')),
453 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
453 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
454 ] + commands.walkopts,
454 ] + commands.walkopts,
455 _('hg kwfiles [OPTION]... [FILE]...'))
455 _('hg kwfiles [OPTION]... [FILE]...'))
456 def files(ui, repo, *pats, **opts):
456 def files(ui, repo, *pats, **opts):
457 '''show files configured for keyword expansion
457 '''show files configured for keyword expansion
458
458
459 List which files in the working directory are matched by the
459 List which files in the working directory are matched by the
460 [keyword] configuration patterns.
460 [keyword] configuration patterns.
461
461
462 Useful to prevent inadvertent keyword expansion and to speed up
462 Useful to prevent inadvertent keyword expansion and to speed up
463 execution by including only files that are actual candidates for
463 execution by including only files that are actual candidates for
464 expansion.
464 expansion.
465
465
466 See :hg:`help keyword` on how to construct patterns both for
466 See :hg:`help keyword` on how to construct patterns both for
467 inclusion and exclusion of files.
467 inclusion and exclusion of files.
468
468
469 With -A/--all and -v/--verbose the codes used to show the status
469 With -A/--all and -v/--verbose the codes used to show the status
470 of files are::
470 of files are::
471
471
472 K = keyword expansion candidate
472 K = keyword expansion candidate
473 k = keyword expansion candidate (not tracked)
473 k = keyword expansion candidate (not tracked)
474 I = ignored
474 I = ignored
475 i = ignored (not tracked)
475 i = ignored (not tracked)
476 '''
476 '''
477 kwt = kwtools['templater']
477 kwt = kwtools['templater']
478 status = _status(ui, repo, kwt, *pats, **opts)
478 status = _status(ui, repo, kwt, *pats, **opts)
479 cwd = pats and repo.getcwd() or ''
479 cwd = pats and repo.getcwd() or ''
480 modified, added, removed, deleted, unknown, ignored, clean = status
480 modified, added, removed, deleted, unknown, ignored, clean = status
481 files = []
481 files = []
482 if not opts.get('unknown') or opts.get('all'):
482 if not opts.get('unknown') or opts.get('all'):
483 files = sorted(modified + added + clean)
483 files = sorted(modified + added + clean)
484 wctx = repo[None]
484 wctx = repo[None]
485 kwfiles = kwt.iskwfile(files, wctx)
485 kwfiles = kwt.iskwfile(files, wctx)
486 kwdeleted = kwt.iskwfile(deleted, wctx)
486 kwdeleted = kwt.iskwfile(deleted, wctx)
487 kwunknown = kwt.iskwfile(unknown, wctx)
487 kwunknown = kwt.iskwfile(unknown, wctx)
488 if not opts.get('ignore') or opts.get('all'):
488 if not opts.get('ignore') or opts.get('all'):
489 showfiles = kwfiles, kwdeleted, kwunknown
489 showfiles = kwfiles, kwdeleted, kwunknown
490 else:
490 else:
491 showfiles = [], [], []
491 showfiles = [], [], []
492 if opts.get('all') or opts.get('ignore'):
492 if opts.get('all') or opts.get('ignore'):
493 showfiles += ([f for f in files if f not in kwfiles],
493 showfiles += ([f for f in files if f not in kwfiles],
494 [f for f in unknown if f not in kwunknown])
494 [f for f in unknown if f not in kwunknown])
495 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
495 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
496 kwstates = zip('K!kIi', showfiles, kwlabels)
496 kwstates = zip('K!kIi', showfiles, kwlabels)
497 for char, filenames, kwstate in kwstates:
497 for char, filenames, kwstate in kwstates:
498 fmt = (opts.get('all') or ui.verbose) and '%s %%s\n' % char or '%s\n'
498 fmt = (opts.get('all') or ui.verbose) and '%s %%s\n' % char or '%s\n'
499 for f in filenames:
499 for f in filenames:
500 ui.write(fmt % repo.pathto(f, cwd), label='kwfiles.' + kwstate)
500 ui.write(fmt % repo.pathto(f, cwd), label='kwfiles.' + kwstate)
501
501
502 @command('kwshrink', commands.walkopts, _('hg kwshrink [OPTION]... [FILE]...'))
502 @command('kwshrink', commands.walkopts, _('hg kwshrink [OPTION]... [FILE]...'))
503 def shrink(ui, repo, *pats, **opts):
503 def shrink(ui, repo, *pats, **opts):
504 '''revert expanded keywords in the working directory
504 '''revert expanded keywords in the working directory
505
505
506 Must be run before changing/disabling active keywords.
506 Must be run before changing/disabling active keywords.
507
507
508 kwshrink refuses to run if given files contain local changes.
508 kwshrink refuses to run if given files contain local changes.
509 '''
509 '''
510 # 3rd argument sets expansion to False
510 # 3rd argument sets expansion to False
511 _kwfwrite(ui, repo, False, *pats, **opts)
511 _kwfwrite(ui, repo, False, *pats, **opts)
512
512
513
513
514 def uisetup(ui):
514 def uisetup(ui):
515 ''' Monkeypatches dispatch._parse to retrieve user command.'''
515 ''' Monkeypatches dispatch._parse to retrieve user command.'''
516
516
517 def kwdispatch_parse(orig, ui, args):
517 def kwdispatch_parse(orig, ui, args):
518 '''Monkeypatch dispatch._parse to obtain running hg command.'''
518 '''Monkeypatch dispatch._parse to obtain running hg command.'''
519 cmd, func, args, options, cmdoptions = orig(ui, args)
519 cmd, func, args, options, cmdoptions = orig(ui, args)
520 kwtools['hgcmd'] = cmd
520 kwtools['hgcmd'] = cmd
521 return cmd, func, args, options, cmdoptions
521 return cmd, func, args, options, cmdoptions
522
522
523 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
523 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
524
524
525 def reposetup(ui, repo):
525 def reposetup(ui, repo):
526 '''Sets up repo as kwrepo for keyword substitution.
526 '''Sets up repo as kwrepo for keyword substitution.
527 Overrides file method to return kwfilelog instead of filelog
527 Overrides file method to return kwfilelog instead of filelog
528 if file matches user configuration.
528 if file matches user configuration.
529 Wraps commit to overwrite configured files with updated
529 Wraps commit to overwrite configured files with updated
530 keyword substitutions.
530 keyword substitutions.
531 Monkeypatches patch and webcommands.'''
531 Monkeypatches patch and webcommands.'''
532
532
533 try:
533 try:
534 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
534 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
535 or '.hg' in util.splitpath(repo.root)
535 or '.hg' in util.splitpath(repo.root)
536 or repo._url.startswith('bundle:')):
536 or repo._url.startswith('bundle:')):
537 return
537 return
538 except AttributeError:
538 except AttributeError:
539 pass
539 pass
540
540
541 inc, exc = [], ['.hg*']
541 inc, exc = [], ['.hg*']
542 for pat, opt in ui.configitems('keyword'):
542 for pat, opt in ui.configitems('keyword'):
543 if opt != 'ignore':
543 if opt != 'ignore':
544 inc.append(pat)
544 inc.append(pat)
545 else:
545 else:
546 exc.append(pat)
546 exc.append(pat)
547 if not inc:
547 if not inc:
548 return
548 return
549
549
550 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
550 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
551
551
552 class kwrepo(repo.__class__):
552 class kwrepo(repo.__class__):
553 def file(self, f):
553 def file(self, f):
554 if f[0] == '/':
554 if f[0] == '/':
555 f = f[1:]
555 f = f[1:]
556 return kwfilelog(self.sopener, kwt, f)
556 return kwfilelog(self.sopener, kwt, f)
557
557
558 def wread(self, filename):
558 def wread(self, filename):
559 data = super(kwrepo, self).wread(filename)
559 data = super(kwrepo, self).wread(filename)
560 return kwt.wread(filename, data)
560 return kwt.wread(filename, data)
561
561
562 def commit(self, *args, **opts):
562 def commit(self, *args, **opts):
563 # use custom commitctx for user commands
563 # use custom commitctx for user commands
564 # other extensions can still wrap repo.commitctx directly
564 # other extensions can still wrap repo.commitctx directly
565 self.commitctx = self.kwcommitctx
565 self.commitctx = self.kwcommitctx
566 try:
566 try:
567 return super(kwrepo, self).commit(*args, **opts)
567 return super(kwrepo, self).commit(*args, **opts)
568 finally:
568 finally:
569 del self.commitctx
569 del self.commitctx
570
570
571 def kwcommitctx(self, ctx, error=False):
571 def kwcommitctx(self, ctx, error=False):
572 n = super(kwrepo, self).commitctx(ctx, error)
572 n = super(kwrepo, self).commitctx(ctx, error)
573 # no lock needed, only called from repo.commit() which already locks
573 # no lock needed, only called from repo.commit() which already locks
574 if not kwt.record:
574 if not kwt.record:
575 restrict = kwt.restrict
575 restrict = kwt.restrict
576 kwt.restrict = True
576 kwt.restrict = True
577 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
577 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
578 False, True)
578 False, True)
579 kwt.restrict = restrict
579 kwt.restrict = restrict
580 return n
580 return n
581
581
582 def rollback(self, dryrun=False):
582 def rollback(self, dryrun=False):
583 wlock = self.wlock()
583 wlock = self.wlock()
584 try:
584 try:
585 if not dryrun:
585 if not dryrun:
586 changed = self['.'].files()
586 changed = self['.'].files()
587 ret = super(kwrepo, self).rollback(dryrun)
587 ret = super(kwrepo, self).rollback(dryrun)
588 if not dryrun:
588 if not dryrun:
589 ctx = self['.']
589 ctx = self['.']
590 modified, added = _preselect(self[None].status(), changed)
590 modified, added = _preselect(self[None].status(), changed)
591 kwt.overwrite(ctx, modified, True, True)
591 kwt.overwrite(ctx, modified, True, True)
592 kwt.overwrite(ctx, added, True, False)
592 kwt.overwrite(ctx, added, True, False)
593 return ret
593 return ret
594 finally:
594 finally:
595 wlock.release()
595 wlock.release()
596
596
597 # monkeypatches
597 # monkeypatches
598 def kwpatchfile_init(orig, self, ui, fname, backend, store, mode, create,
598 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
599 remove, eolmode=None, copysource=None):
600 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
599 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
601 rejects or conflicts due to expanded keywords in working dir.'''
600 rejects or conflicts due to expanded keywords in working dir.'''
602 orig(self, ui, fname, backend, store, mode, create, remove,
601 orig(self, ui, gp, backend, store, eolmode)
603 eolmode, copysource)
604 # shrink keywords read from working dir
602 # shrink keywords read from working dir
605 self.lines = kwt.shrinklines(self.fname, self.lines)
603 self.lines = kwt.shrinklines(self.fname, self.lines)
606
604
607 def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
605 def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
608 opts=None, prefix=''):
606 opts=None, prefix=''):
609 '''Monkeypatch patch.diff to avoid expansion.'''
607 '''Monkeypatch patch.diff to avoid expansion.'''
610 kwt.restrict = True
608 kwt.restrict = True
611 return orig(repo, node1, node2, match, changes, opts, prefix)
609 return orig(repo, node1, node2, match, changes, opts, prefix)
612
610
613 def kwweb_skip(orig, web, req, tmpl):
611 def kwweb_skip(orig, web, req, tmpl):
614 '''Wraps webcommands.x turning off keyword expansion.'''
612 '''Wraps webcommands.x turning off keyword expansion.'''
615 kwt.match = util.never
613 kwt.match = util.never
616 return orig(web, req, tmpl)
614 return orig(web, req, tmpl)
617
615
618 def kw_copy(orig, ui, repo, pats, opts, rename=False):
616 def kw_copy(orig, ui, repo, pats, opts, rename=False):
619 '''Wraps cmdutil.copy so that copy/rename destinations do not
617 '''Wraps cmdutil.copy so that copy/rename destinations do not
620 contain expanded keywords.
618 contain expanded keywords.
621 Note that the source of a regular file destination may also be a
619 Note that the source of a regular file destination may also be a
622 symlink:
620 symlink:
623 hg cp sym x -> x is symlink
621 hg cp sym x -> x is symlink
624 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
622 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
625 For the latter we have to follow the symlink to find out whether its
623 For the latter we have to follow the symlink to find out whether its
626 target is configured for expansion and we therefore must unexpand the
624 target is configured for expansion and we therefore must unexpand the
627 keywords in the destination.'''
625 keywords in the destination.'''
628 orig(ui, repo, pats, opts, rename)
626 orig(ui, repo, pats, opts, rename)
629 if opts.get('dry_run'):
627 if opts.get('dry_run'):
630 return
628 return
631 wctx = repo[None]
629 wctx = repo[None]
632 cwd = repo.getcwd()
630 cwd = repo.getcwd()
633
631
634 def haskwsource(dest):
632 def haskwsource(dest):
635 '''Returns true if dest is a regular file and configured for
633 '''Returns true if dest is a regular file and configured for
636 expansion or a symlink which points to a file configured for
634 expansion or a symlink which points to a file configured for
637 expansion. '''
635 expansion. '''
638 source = repo.dirstate.copied(dest)
636 source = repo.dirstate.copied(dest)
639 if 'l' in wctx.flags(source):
637 if 'l' in wctx.flags(source):
640 source = scmutil.canonpath(repo.root, cwd,
638 source = scmutil.canonpath(repo.root, cwd,
641 os.path.realpath(source))
639 os.path.realpath(source))
642 return kwt.match(source)
640 return kwt.match(source)
643
641
644 candidates = [f for f in repo.dirstate.copies() if
642 candidates = [f for f in repo.dirstate.copies() if
645 not 'l' in wctx.flags(f) and haskwsource(f)]
643 not 'l' in wctx.flags(f) and haskwsource(f)]
646 kwt.overwrite(wctx, candidates, False, False)
644 kwt.overwrite(wctx, candidates, False, False)
647
645
648 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
646 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
649 '''Wraps record.dorecord expanding keywords after recording.'''
647 '''Wraps record.dorecord expanding keywords after recording.'''
650 wlock = repo.wlock()
648 wlock = repo.wlock()
651 try:
649 try:
652 # record returns 0 even when nothing has changed
650 # record returns 0 even when nothing has changed
653 # therefore compare nodes before and after
651 # therefore compare nodes before and after
654 kwt.record = True
652 kwt.record = True
655 ctx = repo['.']
653 ctx = repo['.']
656 wstatus = repo[None].status()
654 wstatus = repo[None].status()
657 ret = orig(ui, repo, commitfunc, *pats, **opts)
655 ret = orig(ui, repo, commitfunc, *pats, **opts)
658 recctx = repo['.']
656 recctx = repo['.']
659 if ctx != recctx:
657 if ctx != recctx:
660 modified, added = _preselect(wstatus, recctx.files())
658 modified, added = _preselect(wstatus, recctx.files())
661 kwt.restrict = False
659 kwt.restrict = False
662 kwt.overwrite(recctx, modified, False, True)
660 kwt.overwrite(recctx, modified, False, True)
663 kwt.overwrite(recctx, added, False, True, True)
661 kwt.overwrite(recctx, added, False, True, True)
664 kwt.restrict = True
662 kwt.restrict = True
665 return ret
663 return ret
666 finally:
664 finally:
667 wlock.release()
665 wlock.release()
668
666
669 def kwfilectx_cmp(orig, self, fctx):
667 def kwfilectx_cmp(orig, self, fctx):
670 # keyword affects data size, comparing wdir and filelog size does
668 # keyword affects data size, comparing wdir and filelog size does
671 # not make sense
669 # not make sense
672 if (fctx._filerev is None and
670 if (fctx._filerev is None and
673 (self._repo._encodefilterpats or
671 (self._repo._encodefilterpats or
674 kwt.match(fctx.path()) and not 'l' in fctx.flags()) or
672 kwt.match(fctx.path()) and not 'l' in fctx.flags()) or
675 self.size() == fctx.size()):
673 self.size() == fctx.size()):
676 return self._filelog.cmp(self._filenode, fctx.data())
674 return self._filelog.cmp(self._filenode, fctx.data())
677 return True
675 return True
678
676
679 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
677 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
680 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
678 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
681 extensions.wrapfunction(patch, 'diff', kw_diff)
679 extensions.wrapfunction(patch, 'diff', kw_diff)
682 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
680 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
683 for c in 'annotate changeset rev filediff diff'.split():
681 for c in 'annotate changeset rev filediff diff'.split():
684 extensions.wrapfunction(webcommands, c, kwweb_skip)
682 extensions.wrapfunction(webcommands, c, kwweb_skip)
685 for name in recordextensions.split():
683 for name in recordextensions.split():
686 try:
684 try:
687 record = extensions.find(name)
685 record = extensions.find(name)
688 extensions.wrapfunction(record, 'dorecord', kw_dorecord)
686 extensions.wrapfunction(record, 'dorecord', kw_dorecord)
689 except KeyError:
687 except KeyError:
690 pass
688 pass
691
689
692 repo.__class__ = kwrepo
690 repo.__class__ = kwrepo
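All of the hooks above are installed through `extensions.wrapfunction`, which rebinds an attribute to a wrapper that receives the original callable as its first argument. The following is a minimal standalone sketch of that pattern (plain Python, no Mercurial imports; `wrapfunction` here is a simplified stand-in for the real helper, which also deals with bound methods and wrapper chaining):

```python
def wrapfunction(container, funcname, wrapper):
    # simplified stand-in for mercurial.extensions.wrapfunction:
    # rebind container.funcname to a closure that passes the
    # original callable to the wrapper as its first argument
    origfn = getattr(container, funcname)
    def wrap(*args, **kwargs):
        return wrapper(origfn, *args, **kwargs)
    setattr(container, funcname, wrap)
    return origfn

class _namespace(object):
    pass

# stand-in for the mercurial.dispatch module
dispatch = _namespace()
def _parse(ui, args):
    # toy parser: first argument is the command name
    return args[0], None, args[1:], {}, {}
dispatch._parse = _parse

kwtools = {'hgcmd': ''}

def kwdispatch_parse(orig, ui, args):
    # same shape as the wrapper installed by uisetup() above:
    # call through to the original, then record the running command
    cmd, func, args, options, cmdoptions = orig(ui, args)
    kwtools['hgcmd'] = cmd
    return cmd, func, args, options, cmdoptions

wrapfunction(dispatch, '_parse', kwdispatch_parse)
```

After the call, `dispatch._parse(None, ['kwexpand', 'file.c'])` still returns the parse tuple, but `kwtools['hgcmd']` is now `'kwexpand'` as a side effect, which is exactly how `reposetup()` learns which command is running.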
@@ -1,1781 +1,1783 b''
1 # patch.py - patch file parsing routines
1 # patch.py - patch file parsing routines
2 #
2 #
3 # Copyright 2006 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006 Brendan Cully <brendan@kublai.com>
4 # Copyright 2007 Chris Mason <chris.mason@oracle.com>
4 # Copyright 2007 Chris Mason <chris.mason@oracle.com>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 import cStringIO, email.Parser, os, errno, re
9 import cStringIO, email.Parser, os, errno, re
10 import tempfile, zlib, shutil
10 import tempfile, zlib, shutil
11
11
12 from i18n import _
12 from i18n import _
13 from node import hex, nullid, short
13 from node import hex, nullid, short
14 import base85, mdiff, scmutil, util, diffhelpers, copies, encoding
14 import base85, mdiff, scmutil, util, diffhelpers, copies, encoding
15
15
16 gitre = re.compile('diff --git a/(.*) b/(.*)')
16 gitre = re.compile('diff --git a/(.*) b/(.*)')
17
17
18 class PatchError(Exception):
18 class PatchError(Exception):
19 pass
19 pass
20
20
21
21
22 # public functions
22 # public functions
23
23
24 def split(stream):
24 def split(stream):
25 '''return an iterator of individual patches from a stream'''
25 '''return an iterator of individual patches from a stream'''
26 def isheader(line, inheader):
26 def isheader(line, inheader):
27 if inheader and line[0] in (' ', '\t'):
27 if inheader and line[0] in (' ', '\t'):
28 # continuation
28 # continuation
29 return True
29 return True
30 if line[0] in (' ', '-', '+'):
30 if line[0] in (' ', '-', '+'):
31 # diff line - don't check for header pattern in there
31 # diff line - don't check for header pattern in there
32 return False
32 return False
33 l = line.split(': ', 1)
33 l = line.split(': ', 1)
34 return len(l) == 2 and ' ' not in l[0]
34 return len(l) == 2 and ' ' not in l[0]
35
35
36 def chunk(lines):
36 def chunk(lines):
37 return cStringIO.StringIO(''.join(lines))
37 return cStringIO.StringIO(''.join(lines))
38
38
39 def hgsplit(stream, cur):
39 def hgsplit(stream, cur):
40 inheader = True
40 inheader = True
41
41
42 for line in stream:
42 for line in stream:
43 if not line.strip():
43 if not line.strip():
44 inheader = False
44 inheader = False
45 if not inheader and line.startswith('# HG changeset patch'):
45 if not inheader and line.startswith('# HG changeset patch'):
46 yield chunk(cur)
46 yield chunk(cur)
47 cur = []
47 cur = []
48 inheader = True
48 inheader = True
49
49
50 cur.append(line)
50 cur.append(line)
51
51
52 if cur:
52 if cur:
53 yield chunk(cur)
53 yield chunk(cur)
54
54
55 def mboxsplit(stream, cur):
55 def mboxsplit(stream, cur):
56 for line in stream:
56 for line in stream:
57 if line.startswith('From '):
57 if line.startswith('From '):
58 for c in split(chunk(cur[1:])):
58 for c in split(chunk(cur[1:])):
59 yield c
59 yield c
60 cur = []
60 cur = []
61
61
62 cur.append(line)
62 cur.append(line)
63
63
64 if cur:
64 if cur:
65 for c in split(chunk(cur[1:])):
65 for c in split(chunk(cur[1:])):
66 yield c
66 yield c
67
67
68 def mimesplit(stream, cur):
68 def mimesplit(stream, cur):
69 def msgfp(m):
69 def msgfp(m):
70 fp = cStringIO.StringIO()
70 fp = cStringIO.StringIO()
71 g = email.Generator.Generator(fp, mangle_from_=False)
71 g = email.Generator.Generator(fp, mangle_from_=False)
72 g.flatten(m)
72 g.flatten(m)
73 fp.seek(0)
73 fp.seek(0)
74 return fp
74 return fp
75
75
76 for line in stream:
76 for line in stream:
77 cur.append(line)
77 cur.append(line)
78 c = chunk(cur)
78 c = chunk(cur)
79
79
80 m = email.Parser.Parser().parse(c)
80 m = email.Parser.Parser().parse(c)
81 if not m.is_multipart():
81 if not m.is_multipart():
82 yield msgfp(m)
82 yield msgfp(m)
83 else:
83 else:
84 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
84 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
85 for part in m.walk():
85 for part in m.walk():
86 ct = part.get_content_type()
86 ct = part.get_content_type()
87 if ct not in ok_types:
87 if ct not in ok_types:
88 continue
88 continue
89 yield msgfp(part)
89 yield msgfp(part)
90
90
91 def headersplit(stream, cur):
91 def headersplit(stream, cur):
92 inheader = False
92 inheader = False
93
93
94 for line in stream:
94 for line in stream:
95 if not inheader and isheader(line, inheader):
95 if not inheader and isheader(line, inheader):
96 yield chunk(cur)
96 yield chunk(cur)
97 cur = []
97 cur = []
98 inheader = True
98 inheader = True
99 if inheader and not isheader(line, inheader):
99 if inheader and not isheader(line, inheader):
100 inheader = False
100 inheader = False
101
101
102 cur.append(line)
102 cur.append(line)
103
103
104 if cur:
104 if cur:
105 yield chunk(cur)
105 yield chunk(cur)
106
106
107 def remainder(cur):
107 def remainder(cur):
108 yield chunk(cur)
108 yield chunk(cur)
109
109
110 class fiter(object):
110 class fiter(object):
111 def __init__(self, fp):
111 def __init__(self, fp):
112 self.fp = fp
112 self.fp = fp
113
113
114 def __iter__(self):
114 def __iter__(self):
115 return self
115 return self
116
116
117 def next(self):
117 def next(self):
118 l = self.fp.readline()
118 l = self.fp.readline()
119 if not l:
119 if not l:
120 raise StopIteration
120 raise StopIteration
121 return l
121 return l
122
122
123 inheader = False
123 inheader = False
124 cur = []
124 cur = []
125
125
126 mimeheaders = ['content-type']
126 mimeheaders = ['content-type']
127
127
128 if not hasattr(stream, 'next'):
128 if not hasattr(stream, 'next'):
129 # http responses, for example, have readline but not next
129 # http responses, for example, have readline but not next
130 stream = fiter(stream)
130 stream = fiter(stream)
131
131
132 for line in stream:
132 for line in stream:
133 cur.append(line)
133 cur.append(line)
134 if line.startswith('# HG changeset patch'):
134 if line.startswith('# HG changeset patch'):
135 return hgsplit(stream, cur)
135 return hgsplit(stream, cur)
136 elif line.startswith('From '):
136 elif line.startswith('From '):
137 return mboxsplit(stream, cur)
137 return mboxsplit(stream, cur)
138 elif isheader(line, inheader):
138 elif isheader(line, inheader):
139 inheader = True
139 inheader = True
140 if line.split(':', 1)[0].lower() in mimeheaders:
140 if line.split(':', 1)[0].lower() in mimeheaders:
141 # let email parser handle this
141 # let email parser handle this
142 return mimesplit(stream, cur)
142 return mimesplit(stream, cur)
143 elif line.startswith('--- ') and inheader:
143 elif line.startswith('--- ') and inheader:
144 # No evil headers seen by diff start, split by hand
144 # No evil headers seen by diff start, split by hand
145 return headersplit(stream, cur)
145 return headersplit(stream, cur)
146 # Not enough info, keep reading
146 # Not enough info, keep reading
147
147
148 # if we are here, we have a very plain patch
148 # if we are here, we have a very plain patch
149 return remainder(cur)
149 return remainder(cur)
150
150
151 def extract(ui, fileobj):
151 def extract(ui, fileobj):
152 '''extract patch from data read from fileobj.
152 '''extract patch from data read from fileobj.
153
153
154 patch can be a normal patch or contained in an email message.
154 patch can be a normal patch or contained in an email message.
155
155
156 return tuple (filename, message, user, date, branch, node, p1, p2).
156 return tuple (filename, message, user, date, branch, node, p1, p2).
157 Any item in the returned tuple can be None. If filename is None,
157 Any item in the returned tuple can be None. If filename is None,
158 fileobj did not contain a patch. Caller must unlink filename when done.'''
158 fileobj did not contain a patch. Caller must unlink filename when done.'''
159
159
160 # attempt to detect the start of a patch
160 # attempt to detect the start of a patch
161 # (this heuristic is borrowed from quilt)
161 # (this heuristic is borrowed from quilt)
162 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |'
162 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |'
163 r'retrieving revision [0-9]+(\.[0-9]+)*$|'
163 r'retrieving revision [0-9]+(\.[0-9]+)*$|'
164 r'---[ \t].*?^\+\+\+[ \t]|'
164 r'---[ \t].*?^\+\+\+[ \t]|'
165 r'\*\*\*[ \t].*?^---[ \t])', re.MULTILINE|re.DOTALL)
165 r'\*\*\*[ \t].*?^---[ \t])', re.MULTILINE|re.DOTALL)
166
166
167 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
167 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
168 tmpfp = os.fdopen(fd, 'w')
168 tmpfp = os.fdopen(fd, 'w')
169 try:
169 try:
170 msg = email.Parser.Parser().parse(fileobj)
170 msg = email.Parser.Parser().parse(fileobj)
171
171
172 subject = msg['Subject']
172 subject = msg['Subject']
173 user = msg['From']
173 user = msg['From']
174 if not subject and not user:
174 if not subject and not user:
175 # Not an email, restore parsed headers if any
175 # Not an email, restore parsed headers if any
176 subject = '\n'.join(': '.join(h) for h in msg.items()) + '\n'
176 subject = '\n'.join(': '.join(h) for h in msg.items()) + '\n'
177
177
178 gitsendmail = 'git-send-email' in msg.get('X-Mailer', '')
178 gitsendmail = 'git-send-email' in msg.get('X-Mailer', '')
179 # should try to parse msg['Date']
179 # should try to parse msg['Date']
180 date = None
180 date = None
181 nodeid = None
181 nodeid = None
182 branch = None
182 branch = None
183 parents = []
183 parents = []
184
184
185 if subject:
185 if subject:
186 if subject.startswith('[PATCH'):
186 if subject.startswith('[PATCH'):
187 pend = subject.find(']')
187 pend = subject.find(']')
188 if pend >= 0:
188 if pend >= 0:
189 subject = subject[pend + 1:].lstrip()
189 subject = subject[pend + 1:].lstrip()
190 subject = subject.replace('\n\t', ' ')
190 subject = subject.replace('\n\t', ' ')
191 ui.debug('Subject: %s\n' % subject)
191 ui.debug('Subject: %s\n' % subject)
192 if user:
192 if user:
193 ui.debug('From: %s\n' % user)
193 ui.debug('From: %s\n' % user)
194 diffs_seen = 0
194 diffs_seen = 0
195 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
195 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
196 message = ''
196 message = ''
197 for part in msg.walk():
197 for part in msg.walk():
198 content_type = part.get_content_type()
198 content_type = part.get_content_type()
199 ui.debug('Content-Type: %s\n' % content_type)
199 ui.debug('Content-Type: %s\n' % content_type)
200 if content_type not in ok_types:
200 if content_type not in ok_types:
201 continue
201 continue
202 payload = part.get_payload(decode=True)
202 payload = part.get_payload(decode=True)
203 m = diffre.search(payload)
203 m = diffre.search(payload)
204 if m:
204 if m:
205 hgpatch = False
205 hgpatch = False
206 hgpatchheader = False
206 hgpatchheader = False
207 ignoretext = False
207 ignoretext = False
208
208
209 ui.debug('found patch at byte %d\n' % m.start(0))
209 ui.debug('found patch at byte %d\n' % m.start(0))
210 diffs_seen += 1
210 diffs_seen += 1
211 cfp = cStringIO.StringIO()
211 cfp = cStringIO.StringIO()
212 for line in payload[:m.start(0)].splitlines():
212 for line in payload[:m.start(0)].splitlines():
213 if line.startswith('# HG changeset patch') and not hgpatch:
213 if line.startswith('# HG changeset patch') and not hgpatch:
214 ui.debug('patch generated by hg export\n')
214 ui.debug('patch generated by hg export\n')
215 hgpatch = True
215 hgpatch = True
216 hgpatchheader = True
216 hgpatchheader = True
217 # drop earlier commit message content
217 # drop earlier commit message content
218 cfp.seek(0)
218 cfp.seek(0)
219 cfp.truncate()
219 cfp.truncate()
220 subject = None
220 subject = None
221 elif hgpatchheader:
221 elif hgpatchheader:
222 if line.startswith('# User '):
222 if line.startswith('# User '):
223 user = line[7:]
223 user = line[7:]
                            ui.debug('From: %s\n' % user)
                        elif line.startswith("# Date "):
                            date = line[7:]
                        elif line.startswith("# Branch "):
                            branch = line[9:]
                        elif line.startswith("# Node ID "):
                            nodeid = line[10:]
                        elif line.startswith("# Parent "):
                            parents.append(line[10:])
                        elif not line.startswith("# "):
                            hgpatchheader = False
                    elif line == '---' and gitsendmail:
                        ignoretext = True
                    if not hgpatchheader and not ignoretext:
                        cfp.write(line)
                        cfp.write('\n')
                message = cfp.getvalue()
                if tmpfp:
                    tmpfp.write(payload)
                    if not payload.endswith('\n'):
                        tmpfp.write('\n')
            elif not diffs_seen and message and content_type == 'text/plain':
                message += '\n' + payload
    except:
        tmpfp.close()
        os.unlink(tmpname)
        raise

    if subject and not message.startswith(subject):
        message = '%s\n%s' % (subject, message)
    tmpfp.close()
    if not diffs_seen:
        os.unlink(tmpname)
        return None, message, user, date, branch, None, None, None
    p1 = parents and parents.pop(0) or None
    p2 = parents and parents.pop(0) or None
    return tmpname, message, user, date, branch, nodeid, p1, p2

class patchmeta(object):
    """Patched file metadata

    'op' is the performed operation, one of ADD, DELETE, RENAME,
    MODIFY or COPY. 'path' is the patched file path. 'oldpath' is set
    to the source file when 'op' is either COPY or RENAME, None
    otherwise. If the file mode is changed, 'mode' is a tuple (islink,
    isexec) where 'islink' is True if the file is a symlink and
    'isexec' is True if the file is executable. Otherwise, 'mode' is
    None.
    """
    def __init__(self, path):
        self.path = path
        self.oldpath = None
        self.mode = None
        self.op = 'MODIFY'
        self.binary = False

    def setmode(self, mode):
        islink = mode & 020000
        isexec = mode & 0100
        self.mode = (islink, isexec)

    def copy(self):
        other = patchmeta(self.path)
        other.oldpath = self.oldpath
        other.mode = self.mode
        other.op = self.op
        other.binary = self.binary
        return other

    def __repr__(self):
        return "<patchmeta %s %r>" % (self.op, self.path)

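As an aside, the behavior of this class can be exercised in isolation. The sketch below is not part of patch.py: it repeats the class body standalone, with the Python 2 octal literals (`020000`, `0100`) written in `0o` form so it also runs on Python 3, and the file paths are made up for illustration.

```python
class patchmeta(object):
    """Standalone copy of the patchmeta class above (0o octals)."""
    def __init__(self, path):
        self.path = path
        self.oldpath = None
        self.mode = None
        self.op = 'MODIFY'
        self.binary = False

    def setmode(self, mode):
        # Split an st_mode-style value into (islink, isexec) masks.
        islink = mode & 0o020000
        isexec = mode & 0o100
        self.mode = (islink, isexec)

    def copy(self):
        other = patchmeta(self.path)
        other.oldpath = self.oldpath
        other.mode = self.mode
        other.op = self.op
        other.binary = self.binary
        return other

# Record a rename of an executable file, then snapshot it with copy().
gp = patchmeta('new/name.sh')
gp.op = 'RENAME'
gp.oldpath = 'old/name.sh'
gp.setmode(0o100755)            # mode of a regular executable file
snapshot = gp.copy()
print(snapshot.op, snapshot.oldpath, bool(snapshot.mode[1]))
# → RENAME old/name.sh True
```

Note that `mode` stores the raw mask values, not booleans; callers only test them for truth.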
def readgitpatch(lr):
    """extract git-style metadata about patches from <patchname>"""

    # Filter patch for git information
    gp = None
    gitpatches = []
    for line in lr:
        line = line.rstrip(' \r\n')
        if line.startswith('diff --git'):
            m = gitre.match(line)
            if m:
                if gp:
                    gitpatches.append(gp)
                dst = m.group(2)
                gp = patchmeta(dst)
        elif gp:
            if line.startswith('--- '):
                gitpatches.append(gp)
                gp = None
                continue
            if line.startswith('rename from '):
                gp.op = 'RENAME'
                gp.oldpath = line[12:]
            elif line.startswith('rename to '):
                gp.path = line[10:]
            elif line.startswith('copy from '):
                gp.op = 'COPY'
                gp.oldpath = line[10:]
            elif line.startswith('copy to '):
                gp.path = line[8:]
            elif line.startswith('deleted file'):
                gp.op = 'DELETE'
            elif line.startswith('new file mode '):
                gp.op = 'ADD'
                gp.setmode(int(line[-6:], 8))
            elif line.startswith('new mode '):
                gp.setmode(int(line[-6:], 8))
            elif line.startswith('GIT binary patch'):
                gp.binary = True
    if gp:
        gitpatches.append(gp)

    return gitpatches

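readgitpatch() is what turns git extended headers into patchmeta objects. A reduced standalone sketch of the same scanning loop, simplified to the rename case only, with `gitre` assumed to be the `diff --git a/... b/...` pattern defined elsewhere in patch.py and a made-up input diff:

```python
import re

# Assumed shape of gitre (defined elsewhere in patch.py).
gitre = re.compile('diff --git a/(.*) b/(.*)')

class patchmeta(object):
    def __init__(self, path):
        self.path = path
        self.oldpath = None
        self.op = 'MODIFY'

def readgitpatch(lines):
    gp = None
    gitpatches = []
    for line in lines:
        line = line.rstrip(' \r\n')
        if line.startswith('diff --git'):
            m = gitre.match(line)
            if m:
                if gp:
                    gitpatches.append(gp)
                gp = patchmeta(m.group(2))   # destination path
        elif gp:
            if line.startswith('--- '):      # hunk header: metadata is over
                gitpatches.append(gp)
                gp = None
                continue
            if line.startswith('rename from '):
                gp.op = 'RENAME'
                gp.oldpath = line[12:]
            elif line.startswith('rename to '):
                gp.path = line[10:]
    if gp:
        gitpatches.append(gp)
    return gitpatches

diff = '''diff --git a/old.txt b/new.txt
rename from old.txt
rename to new.txt
'''
gps = readgitpatch(diff.splitlines(True))
print(gps[0].op, gps[0].oldpath, gps[0].path)   # → RENAME old.txt new.txt
```

The full function additionally handles copy/delete/add, mode changes, and binary patches, as shown above.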
class linereader(object):
    # simple class to allow pushing lines back into the input stream
    def __init__(self, fp):
        self.fp = fp
        self.buf = []

    def push(self, line):
        if line is not None:
            self.buf.append(line)

    def readline(self):
        if self.buf:
            l = self.buf[0]
            del self.buf[0]
            return l
        return self.fp.readline()

    def __iter__(self):
        while True:
            l = self.readline()
            if not l:
                break
            yield l

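The push-back behavior of linereader can be sketched standalone (illustrative only; `io.StringIO` stands in for the patch file object):

```python
import io

# Standalone copy of the linereader class above.
class linereader(object):
    def __init__(self, fp):
        self.fp = fp
        self.buf = []

    def push(self, line):
        if line is not None:
            self.buf.append(line)

    def readline(self):
        # pushed-back lines are served before reading from fp again
        if self.buf:
            l = self.buf[0]
            del self.buf[0]
            return l
        return self.fp.readline()

    def __iter__(self):
        while True:
            l = self.readline()
            if not l:
                break
            yield l

lr = linereader(io.StringIO('one\ntwo\n'))
peeked = lr.readline()      # consume the first line...
lr.push(peeked)             # ...then un-read it
print([l.strip() for l in lr])   # → ['one', 'two']
```

This is how the hunk parsers below can peek at a line (e.g. a `---` header) and hand it back for the next reader.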
class abstractbackend(object):
    def __init__(self, ui):
        self.ui = ui

    def getfile(self, fname):
        """Return target file data and flags as a (data, (islink,
        isexec)) tuple.
        """
        raise NotImplementedError

    def setfile(self, fname, data, mode, copysource):
        """Write data to target file fname and set its mode. mode is a
        (islink, isexec) tuple. If data is None, the file content should
        be left unchanged. If the file is modified after being copied,
        copysource is set to the original file name.
        """
        raise NotImplementedError

    def unlink(self, fname):
        """Unlink target file."""
        raise NotImplementedError

    def writerej(self, fname, failed, total, lines):
        """Write rejected lines for fname. failed is the number of hunks
        which failed to apply and total is the total number of hunks for
        this file.
        """
        pass

    def exists(self, fname):
        raise NotImplementedError

class fsbackend(abstractbackend):
    def __init__(self, ui, basedir):
        super(fsbackend, self).__init__(ui)
        self.opener = scmutil.opener(basedir)

    def _join(self, f):
        return os.path.join(self.opener.base, f)

    def getfile(self, fname):
        path = self._join(fname)
        if os.path.islink(path):
            return (os.readlink(path), (True, False))
        isexec = False
        try:
            isexec = os.lstat(path).st_mode & 0100 != 0
        except OSError, e:
            if e.errno != errno.ENOENT:
                raise
        return (self.opener.read(fname), (False, isexec))

    def setfile(self, fname, data, mode, copysource):
        islink, isexec = mode
        if data is None:
            util.setflags(self._join(fname), islink, isexec)
            return
        if islink:
            self.opener.symlink(data, fname)
        else:
            self.opener.write(fname, data)
            if isexec:
                util.setflags(self._join(fname), False, True)

    def unlink(self, fname):
        try:
            util.unlinkpath(self._join(fname))
        except OSError, inst:
            if inst.errno != errno.ENOENT:
                raise

    def writerej(self, fname, failed, total, lines):
        fname = fname + ".rej"
        self.ui.warn(
            _("%d out of %d hunks FAILED -- saving rejects to file %s\n") %
            (failed, total, fname))
        fp = self.opener(fname, 'w')
        fp.writelines(lines)
        fp.close()

    def exists(self, fname):
        return os.path.lexists(self._join(fname))

class workingbackend(fsbackend):
    def __init__(self, ui, repo, similarity):
        super(workingbackend, self).__init__(ui, repo.root)
        self.repo = repo
        self.similarity = similarity
        self.removed = set()
        self.changed = set()
        self.copied = []

    def _checkknown(self, fname):
        if self.repo.dirstate[fname] == '?' and self.exists(fname):
            raise PatchError(_('cannot patch %s: file is not tracked') % fname)

    def setfile(self, fname, data, mode, copysource):
        self._checkknown(fname)
        super(workingbackend, self).setfile(fname, data, mode, copysource)
        if copysource is not None:
            self.copied.append((copysource, fname))
        self.changed.add(fname)

    def unlink(self, fname):
        self._checkknown(fname)
        super(workingbackend, self).unlink(fname)
        self.removed.add(fname)
        self.changed.add(fname)

    def close(self):
        wctx = self.repo[None]
        addremoved = set(self.changed)
        for src, dst in self.copied:
            scmutil.dirstatecopy(self.ui, self.repo, wctx, src, dst)
            addremoved.discard(src)
        if (not self.similarity) and self.removed:
            wctx.forget(sorted(self.removed))
        if addremoved:
            cwd = self.repo.getcwd()
            if cwd:
                addremoved = [util.pathto(self.repo.root, cwd, f)
                              for f in addremoved]
            scmutil.addremove(self.repo, addremoved, similarity=self.similarity)
        return sorted(self.changed)

class filestore(object):
    def __init__(self):
        self.opener = None
        self.files = {}
        self.created = 0

    def setfile(self, fname, data, mode):
        if self.opener is None:
            root = tempfile.mkdtemp(prefix='hg-patch-')
            self.opener = scmutil.opener(root)
        # Avoid filename issues with these simple names
        fn = str(self.created)
        self.opener.write(fn, data)
        self.created += 1
        self.files[fname] = (fn, mode)

    def getfile(self, fname):
        if fname not in self.files:
            raise IOError()
        fn, mode = self.files[fname]
        return self.opener.read(fn), mode

    def close(self):
        if self.opener:
            shutil.rmtree(self.opener.base)

# @@ -start,len +start,len @@ or @@ -start +start @@ if len is 1
unidesc = re.compile('@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))? @@')
contextdesc = re.compile('(---|\*\*\*) (\d+)(,(\d+))? (---|\*\*\*)')
eolmodes = ['strict', 'crlf', 'lf', 'auto']

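A quick standalone check of what the unified-diff pattern captures (this repeats the two patterns above with raw-string literals added; behavior is unchanged):

```python
import re

# Standalone copies of the hunk-header patterns above.
unidesc = re.compile(r'@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))? @@')
contextdesc = re.compile(r'(---|\*\*\*) (\d+)(,(\d+))? (---|\*\*\*)')

m = unidesc.match('@@ -12,5 +12,7 @@')
starta, _, lena, startb, _, lenb = m.groups()
print(starta, lena, startb, lenb)       # → 12 5 12 7

# When a range is one line long, the ",len" part is omitted entirely,
# so the inner groups come back as None:
m = unidesc.match('@@ -3 +3 @@')
print(m.group(1), m.group(3))           # → 3 None
```

The `None` case is exactly what `read_unified_hunk` below tests for when it defaults `lena`/`lenb` to 1.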
class patchfile(object):
    def __init__(self, ui, gp, backend, store, eolmode='strict'):
        self.fname = gp.path
        self.eolmode = eolmode
        self.eol = None
        self.backend = backend
        self.ui = ui
        self.lines = []
        self.exists = False
        self.missing = True
        self.mode = gp.mode
        self.copysource = gp.oldpath
        self.create = gp.op in ('ADD', 'COPY', 'RENAME')
        self.remove = gp.op == 'DELETE'
        try:
            if self.copysource is None:
                data, mode = backend.getfile(self.fname)
                self.exists = True
            else:
                data, mode = store.getfile(self.copysource)
                self.exists = backend.exists(self.fname)
            self.missing = False
            if data:
                self.lines = data.splitlines(True)
            if self.mode is None:
                self.mode = mode
            if self.lines:
                # Normalize line endings
                if self.lines[0].endswith('\r\n'):
                    self.eol = '\r\n'
                elif self.lines[0].endswith('\n'):
                    self.eol = '\n'
                if eolmode != 'strict':
                    nlines = []
                    for l in self.lines:
                        if l.endswith('\r\n'):
                            l = l[:-2] + '\n'
                        nlines.append(l)
                    self.lines = nlines
        except IOError:
            if self.create:
                self.missing = False
            if self.mode is None:
                self.mode = (False, False)
        if self.missing:
            self.ui.warn(_("unable to find '%s' for patching\n") % self.fname)

        self.hash = {}
        self.dirty = 0
        self.offset = 0
        self.skew = 0
        self.rej = []
        self.fileprinted = False
        self.printfile(False)
        self.hunks = 0

    def writelines(self, fname, lines, mode):
        if self.eolmode == 'auto':
            eol = self.eol
        elif self.eolmode == 'crlf':
            eol = '\r\n'
        else:
            eol = '\n'

        if self.eolmode != 'strict' and eol and eol != '\n':
            rawlines = []
            for l in lines:
                if l and l[-1] == '\n':
                    l = l[:-1] + eol
                rawlines.append(l)
            lines = rawlines

        self.backend.setfile(fname, ''.join(lines), mode, self.copysource)

    def printfile(self, warn):
        if self.fileprinted:
            return
        if warn or self.ui.verbose:
            self.fileprinted = True
        s = _("patching file %s\n") % self.fname
        if warn:
            self.ui.warn(s)
        else:
            self.ui.note(s)

    def findlines(self, l, linenum):
        # looks through the hash and finds candidate lines. The
        # result is a list of line numbers sorted based on distance
        # from linenum

        cand = self.hash.get(l, [])
        if len(cand) > 1:
            # resort our list of potentials forward then back.
            cand.sort(key=lambda x: abs(x - linenum))
        return cand

    def write_rej(self):
        # our rejects are a little different from patch(1). This always
        # creates rejects in the same form as the original patch. A file
        # header is inserted so that you can run the reject through patch again
        # without having to type the filename.
        if not self.rej:
            return
        base = os.path.basename(self.fname)
        lines = ["--- %s\n+++ %s\n" % (base, base)]
        for x in self.rej:
            for l in x.hunk:
                lines.append(l)
                if l[-1] != '\n':
                    lines.append("\n\ No newline at end of file\n")
        self.backend.writerej(self.fname, len(self.rej), self.hunks, lines)

    def apply(self, h):
        if not h.complete():
            raise PatchError(_("bad hunk #%d %s (%d %d %d %d)") %
                             (h.number, h.desc, len(h.a), h.lena, len(h.b),
                              h.lenb))

        self.hunks += 1

        if self.missing:
            self.rej.append(h)
            return -1

        if self.exists and self.create:
            if self.copysource:
                self.ui.warn(_("cannot create %s: destination already "
                               "exists\n") % self.fname)
            else:
                self.ui.warn(_("file %s already exists\n") % self.fname)
            self.rej.append(h)
            return -1

        if isinstance(h, binhunk):
            if self.remove:
                self.backend.unlink(self.fname)
            else:
                self.lines[:] = h.new()
                self.offset += len(h.new())
                self.dirty = True
            return 0

        horig = h
        if (self.eolmode in ('crlf', 'lf')
            or self.eolmode == 'auto' and self.eol):
            # If new eols are going to be normalized, then normalize
            # hunk data before patching. Otherwise, preserve input
            # line-endings.
            h = h.getnormalized()

        # fast case first, no offsets, no fuzz
        old = h.old()
        # patch starts counting at 1 unless we are adding the file
        if h.starta == 0:
            start = 0
        else:
            start = h.starta + self.offset - 1
        orig_start = start
        # if there's skew we want to emit the "(offset %d lines)" even
        # when the hunk cleanly applies at start + skew, so skip the
        # fast case code
        if self.skew == 0 and diffhelpers.testhunk(old, self.lines, start) == 0:
            if self.remove:
                self.backend.unlink(self.fname)
            else:
                self.lines[start : start + h.lena] = h.new()
                self.offset += h.lenb - h.lena
                self.dirty = True
            return 0

        # ok, we couldn't match the hunk. Lets look for offsets and fuzz it
        self.hash = {}
        for x, s in enumerate(self.lines):
            self.hash.setdefault(s, []).append(x)
        if h.hunk[-1][0] != ' ':
            # if the hunk tried to put something at the bottom of the file
            # override the start line and use eof here
            search_start = len(self.lines)
        else:
            search_start = orig_start + self.skew

        for fuzzlen in xrange(3):
            for toponly in [True, False]:
                old = h.old(fuzzlen, toponly)

                cand = self.findlines(old[0][1:], search_start)
                for l in cand:
                    if diffhelpers.testhunk(old, self.lines, l) == 0:
                        newlines = h.new(fuzzlen, toponly)
                        self.lines[l : l + len(old)] = newlines
                        self.offset += len(newlines) - len(old)
                        self.skew = l - orig_start
                        self.dirty = True
                        offset = l - orig_start - fuzzlen
                        if fuzzlen:
                            msg = _("Hunk #%d succeeded at %d "
                                    "with fuzz %d "
                                    "(offset %d lines).\n")
                            self.printfile(True)
                            self.ui.warn(msg %
                                (h.number, l + 1, fuzzlen, offset))
                        else:
                            msg = _("Hunk #%d succeeded at %d "
                                    "(offset %d lines).\n")
                            self.ui.note(msg % (h.number, l + 1, offset))
                        return fuzzlen
        self.printfile(True)
        self.ui.warn(_("Hunk #%d FAILED at %d\n") % (h.number, orig_start))
        self.rej.append(horig)
        return -1

    def close(self):
        if self.dirty:
            self.writelines(self.fname, self.lines, self.mode)
        self.write_rej()
        return len(self.rej)

730 class hunk(object):
737 class hunk(object):
731 def __init__(self, desc, num, lr, context):
738 def __init__(self, desc, num, lr, context):
732 self.number = num
739 self.number = num
733 self.desc = desc
740 self.desc = desc
734 self.hunk = [desc]
741 self.hunk = [desc]
735 self.a = []
742 self.a = []
736 self.b = []
743 self.b = []
737 self.starta = self.lena = None
744 self.starta = self.lena = None
738 self.startb = self.lenb = None
745 self.startb = self.lenb = None
739 if lr is not None:
746 if lr is not None:
740 if context:
747 if context:
741 self.read_context_hunk(lr)
748 self.read_context_hunk(lr)
742 else:
749 else:
743 self.read_unified_hunk(lr)
750 self.read_unified_hunk(lr)
744
751
745 def getnormalized(self):
752 def getnormalized(self):
746 """Return a copy with line endings normalized to LF."""
753 """Return a copy with line endings normalized to LF."""
747
754
748 def normalize(lines):
755 def normalize(lines):
749 nlines = []
756 nlines = []
750 for line in lines:
757 for line in lines:
751 if line.endswith('\r\n'):
758 if line.endswith('\r\n'):
752 line = line[:-2] + '\n'
759 line = line[:-2] + '\n'
753 nlines.append(line)
760 nlines.append(line)
754 return nlines
761 return nlines
755
762
756 # Dummy object, it is rebuilt manually
763 # Dummy object, it is rebuilt manually
757 nh = hunk(self.desc, self.number, None, None)
764 nh = hunk(self.desc, self.number, None, None)
758 nh.number = self.number
765 nh.number = self.number
759 nh.desc = self.desc
766 nh.desc = self.desc
760 nh.hunk = self.hunk
767 nh.hunk = self.hunk
761 nh.a = normalize(self.a)
768 nh.a = normalize(self.a)
762 nh.b = normalize(self.b)
769 nh.b = normalize(self.b)
763 nh.starta = self.starta
770 nh.starta = self.starta
764 nh.startb = self.startb
771 nh.startb = self.startb
765 nh.lena = self.lena
772 nh.lena = self.lena
766 nh.lenb = self.lenb
773 nh.lenb = self.lenb
767 return nh
774 return nh
768
775
769 def read_unified_hunk(self, lr):
776 def read_unified_hunk(self, lr):
770 m = unidesc.match(self.desc)
777 m = unidesc.match(self.desc)
771 if not m:
778 if not m:
772 raise PatchError(_("bad hunk #%d") % self.number)
779 raise PatchError(_("bad hunk #%d") % self.number)
773 self.starta, foo, self.lena, self.startb, foo2, self.lenb = m.groups()
780 self.starta, foo, self.lena, self.startb, foo2, self.lenb = m.groups()
774 if self.lena is None:
781 if self.lena is None:
775 self.lena = 1
782 self.lena = 1
776 else:
783 else:
777 self.lena = int(self.lena)
784 self.lena = int(self.lena)
778 if self.lenb is None:
785 if self.lenb is None:
779 self.lenb = 1
786 self.lenb = 1
780 else:
787 else:
781 self.lenb = int(self.lenb)
788 self.lenb = int(self.lenb)
782 self.starta = int(self.starta)
789 self.starta = int(self.starta)
783 self.startb = int(self.startb)
790 self.startb = int(self.startb)
784 diffhelpers.addlines(lr, self.hunk, self.lena, self.lenb, self.a, self.b)
791 diffhelpers.addlines(lr, self.hunk, self.lena, self.lenb, self.a, self.b)
785 # if we hit eof before finishing out the hunk, the last line will
792 # if we hit eof before finishing out the hunk, the last line will
786 # be zero length. Lets try to fix it up.
793 # be zero length. Lets try to fix it up.
787 while len(self.hunk[-1]) == 0:
794 while len(self.hunk[-1]) == 0:
788 del self.hunk[-1]
795 del self.hunk[-1]
789 del self.a[-1]
796 del self.a[-1]
790 del self.b[-1]
797 del self.b[-1]
791 self.lena -= 1
798 self.lena -= 1
792 self.lenb -= 1
799 self.lenb -= 1
793 self._fixnewline(lr)
800 self._fixnewline(lr)
794
801
    def read_context_hunk(self, lr):
        self.desc = lr.readline()
        m = contextdesc.match(self.desc)
        if not m:
            raise PatchError(_("bad hunk #%d") % self.number)
        foo, self.starta, foo2, aend, foo3 = m.groups()
        self.starta = int(self.starta)
        if aend is None:
            aend = self.starta
        self.lena = int(aend) - self.starta
        if self.starta:
            self.lena += 1
        for x in xrange(self.lena):
            l = lr.readline()
            if l.startswith('---'):
                # lines addition, old block is empty
                lr.push(l)
                break
            s = l[2:]
            if l.startswith('- ') or l.startswith('! '):
                u = '-' + s
            elif l.startswith('  '):
                u = ' ' + s
            else:
                raise PatchError(_("bad hunk #%d old text line %d") %
                                 (self.number, x))
            self.a.append(u)
            self.hunk.append(u)

        l = lr.readline()
        if l.startswith('\ '):
            s = self.a[-1][:-1]
            self.a[-1] = s
            self.hunk[-1] = s
            l = lr.readline()
        m = contextdesc.match(l)
        if not m:
            raise PatchError(_("bad hunk #%d") % self.number)
        foo, self.startb, foo2, bend, foo3 = m.groups()
        self.startb = int(self.startb)
        if bend is None:
            bend = self.startb
        self.lenb = int(bend) - self.startb
        if self.startb:
            self.lenb += 1
        hunki = 1
        for x in xrange(self.lenb):
            l = lr.readline()
            if l.startswith('\ '):
                # XXX: the only way to hit this is with an invalid line range.
                # The no-eol marker is not counted in the line range, but I
                # guess there are diff(1) out there which behave differently.
                s = self.b[-1][:-1]
                self.b[-1] = s
                self.hunk[hunki - 1] = s
                continue
            if not l:
                # line deletions, new block is empty and we hit EOF
                lr.push(l)
                break
            s = l[2:]
            if l.startswith('+ ') or l.startswith('! '):
                u = '+' + s
            elif l.startswith('  '):
                u = ' ' + s
            elif len(self.b) == 0:
                # line deletions, new block is empty
                lr.push(l)
                break
            else:
                raise PatchError(_("bad hunk #%d new text line %d") %
                                 (self.number, x))
            self.b.append(s)
            while True:
                if hunki >= len(self.hunk):
                    h = ""
                else:
                    h = self.hunk[hunki]
                hunki += 1
                if h == u:
                    break
                elif h.startswith('-'):
                    continue
                else:
                    self.hunk.insert(hunki - 1, u)
                    break

        if not self.a:
            # this happens when lines were only added to the hunk
            for x in self.hunk:
                if x.startswith('-') or x.startswith(' '):
                    self.a.append(x)
        if not self.b:
            # this happens when lines were only deleted from the hunk
            for x in self.hunk:
                if x.startswith('+') or x.startswith(' '):
                    self.b.append(x[1:])
        # @@ -start,len +start,len @@
        self.desc = "@@ -%d,%d +%d,%d @@\n" % (self.starta, self.lena,
                                               self.startb, self.lenb)
        self.hunk[0] = self.desc
        self._fixnewline(lr)

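read_context_hunk() is essentially translating context-diff markers into unified-diff ones. A small sketch of that mapping (hypothetical helper, not part of the module):

```python
def context_to_unified(line, oldside):
    # Old-side lines: '- ' and '! ' become deletions; new-side lines:
    # '+ ' and '! ' become additions; '  ' is shared context either way.
    s = line[2:]
    if oldside and (line.startswith('- ') or line.startswith('! ')):
        return '-' + s
    if not oldside and (line.startswith('+ ') or line.startswith('! ')):
        return '+' + s
    if line.startswith('  '):
        return ' ' + s
    raise ValueError('bad context diff line: %r' % line)

print(context_to_unified('! old text', True))   # → -old text
print(context_to_unified('! new text', False))  # → +new text
```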
    def _fixnewline(self, lr):
        l = lr.readline()
        if l.startswith('\ '):
            diffhelpers.fix_newline(self.hunk, self.a, self.b)
        else:
            lr.push(l)

    def complete(self):
        return len(self.a) == self.lena and len(self.b) == self.lenb

    def fuzzit(self, l, fuzz, toponly):
        # this removes context lines from the top and bottom of list 'l'. It
        # checks the hunk to make sure only context lines are removed, and then
        # returns a new shortened list of lines.
        fuzz = min(fuzz, len(l) - 1)
        if fuzz:
            top = 0
            bot = 0
            hlen = len(self.hunk)
            for x in xrange(hlen - 1):
                # the hunk starts with the @@ line, so use x+1
                if self.hunk[x + 1][0] == ' ':
                    top += 1
                else:
                    break
            if not toponly:
                for x in xrange(hlen - 1):
                    if self.hunk[hlen - bot - 1][0] == ' ':
                        bot += 1
                    else:
                        break

            # top and bot now count context in the hunk
            # adjust them if either one is short
            context = max(top, bot, 3)
            if bot < context:
                bot = max(0, fuzz - (context - bot))
            else:
                bot = min(fuzz, bot)
            if top < context:
                top = max(0, fuzz - (context - top))
            else:
                top = min(fuzz, top)

            return l[top:len(l) - bot]
        return l

    def old(self, fuzz=0, toponly=False):
        return self.fuzzit(self.a, fuzz, toponly)

    def new(self, fuzz=0, toponly=False):
        return self.fuzzit(self.b, fuzz, toponly)

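fuzzit() only ever discards context lines, never changes. A standalone sketch of the same trimming logic, assuming the list-of-strings hunk format used above (hunk[0] is the @@ header):

```python
def trim_for_fuzz(l, hunk, fuzz, toponly=False):
    # Count leading/trailing context lines in the hunk body, then drop
    # at most `fuzz` of them from each end of `l`.
    fuzz = min(fuzz, len(l) - 1)
    if fuzz <= 0:
        return l
    top = 0
    for line in hunk[1:]:
        if line[0] != ' ':
            break
        top += 1
    bot = 0
    if not toponly:
        for line in reversed(hunk[1:]):
            if line[0] != ' ':
                break
            bot += 1
    # adjust when one end is short on context
    context = max(top, bot, 3)
    bot = max(0, fuzz - (context - bot)) if bot < context else min(fuzz, bot)
    top = max(0, fuzz - (context - top)) if top < context else min(fuzz, top)
    return l[top:len(l) - bot]

hunk = ['@@ -1,7 +1,7 @@\n', ' 1', ' 2', ' 3', '-x', ' 4', ' 5', ' 6']
old = [' 1', ' 2', ' 3', '-x', ' 4', ' 5', ' 6']
print(trim_for_fuzz(old, hunk, 1))  # → [' 2', ' 3', '-x', ' 4', ' 5']
```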
class binhunk:
    'A binary patch file. Only understands literals so far.'
    def __init__(self, lr):
        self.text = None
        self.hunk = ['GIT binary patch\n']
        self._read(lr)

    def complete(self):
        return self.text is not None

    def new(self):
        return [self.text]

    def _read(self, lr):
        line = lr.readline()
        self.hunk.append(line)
        while line and not line.startswith('literal '):
            line = lr.readline()
            self.hunk.append(line)
        if not line:
            raise PatchError(_('could not extract binary patch'))
        size = int(line[8:].rstrip())
        dec = []
        line = lr.readline()
        self.hunk.append(line)
        while len(line) > 1:
            l = line[0]
            if l <= 'Z' and l >= 'A':
                l = ord(l) - ord('A') + 1
            else:
                l = ord(l) - ord('a') + 27
            dec.append(base85.b85decode(line[1:-1])[:l])
            line = lr.readline()
            self.hunk.append(line)
        text = zlib.decompress(''.join(dec))
        if len(text) != size:
            raise PatchError(_('binary patch is %d bytes, not %d') %
                             (len(text), size))
        self.text = text

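The loop above decodes git's binary-literal encoding: each data line starts with a length character ('A'..'Z' for 1..26 payload bytes, 'a'..'z' for 27..52) followed by base85 data. A round-trip sketch using Python 3's base64.b85encode, which uses the same alphabet as git binary diffs (Mercurial itself uses its own base85 module):

```python
import base64

def decode_binary_line(line):
    # The first char encodes the decoded payload length; the rest is
    # base85 data, padded up to a multiple of 4 bytes by the encoder.
    c = line[0]
    if 'A' <= c <= 'Z':
        n = ord(c) - ord('A') + 1
    else:
        n = ord(c) - ord('a') + 27
    return base64.b85decode(line[1:])[:n]

payload = b'abcd'                      # 4 bytes → length char 'D'
line = 'D' + base64.b85encode(payload).decode('ascii')
print(decode_binary_line(line) == payload)  # → True
```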
def parsefilename(str):
    # --- filename \t|space stuff
    s = str[4:].rstrip('\r\n')
    i = s.find('\t')
    if i < 0:
        i = s.find(' ')
        if i < 0:
            return s
    return s[:i]

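parsefilename() takes the header payload after the 4-character '--- '/'+++ ' prefix and cuts at the first tab, or failing that, the first space. The same logic standalone, with examples:

```python
def parse_header_filename(line):
    # Strip the '--- '/'+++ ' prefix, then cut at '\t' or ' '.
    s = line[4:].rstrip('\r\n')
    i = s.find('\t')
    if i < 0:
        i = s.find(' ')
        if i < 0:
            return s
    return s[:i]

print(parse_header_filename('--- a/foo.c\t2011-06-14 10:00:00'))  # → a/foo.c
print(parse_header_filename('+++ b/foo.c'))                       # → b/foo.c
```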
def pathstrip(path, strip):
    pathlen = len(path)
    i = 0
    if strip == 0:
        return '', path.rstrip()
    count = strip
    while count > 0:
        i = path.find('/', i)
        if i == -1:
            raise PatchError(_("unable to strip away %d of %d dirs from %s") %
                             (count, strip, path))
        i += 1
        # consume '//' in the path
        while i < pathlen - 1 and path[i] == '/':
            i += 1
        count -= 1
    return path[:i].lstrip(), path[i:].rstrip()

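pathstrip() mirrors patch(1)'s -p option, with runs of '/' counting as a single separator. The same logic as a self-contained sketch (ValueError stands in for PatchError here), with examples:

```python
def pathstrip(path, strip):
    # Split `path` into the stripped prefix and the remainder, like
    # `patch -p<strip>` would.
    if strip == 0:
        return '', path.rstrip()
    i = 0
    pathlen = len(path)
    count = strip
    while count > 0:
        i = path.find('/', i)
        if i == -1:
            raise ValueError("unable to strip away %d of %d dirs from %s"
                             % (count, strip, path))
        i += 1
        # consume '//' in the path
        while i < pathlen - 1 and path[i] == '/':
            i += 1
        count -= 1
    return path[:i].lstrip(), path[i:].rstrip()

print(pathstrip('a/b/c.txt', 1))   # → ('a/', 'b/c.txt')
print(pathstrip('a//b/c.txt', 1))  # → ('a//', 'b/c.txt')
```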
-def selectfile(backend, afile_orig, bfile_orig, hunk, strip, gp):
-    if gp:
-        # Git patches do not play games. Excluding copies from the
-        # following heuristic avoids a lot of confusion
-        fname = pathstrip(gp.path, strip - 1)[1]
-        create = gp.op in ('ADD', 'COPY', 'RENAME')
-        remove = gp.op == 'DELETE'
-        return fname, create, remove
+def makepatchmeta(backend, afile_orig, bfile_orig, hunk, strip):
    nulla = afile_orig == "/dev/null"
    nullb = bfile_orig == "/dev/null"
    create = nulla and hunk.starta == 0 and hunk.lena == 0
    remove = nullb and hunk.startb == 0 and hunk.lenb == 0
    abase, afile = pathstrip(afile_orig, strip)
    gooda = not nulla and backend.exists(afile)
    bbase, bfile = pathstrip(bfile_orig, strip)
    if afile == bfile:
        goodb = gooda
    else:
        goodb = not nullb and backend.exists(bfile)
    missing = not goodb and not gooda and not create

    # some diff programs apparently produce patches where the afile is
    # not /dev/null, but afile starts with bfile
    abasedir = afile[:afile.rfind('/') + 1]
    bbasedir = bfile[:bfile.rfind('/') + 1]
    if (missing and abasedir == bbasedir and afile.startswith(bfile)
        and hunk.starta == 0 and hunk.lena == 0):
        create = True
        missing = False

    # If afile is "a/b/foo" and bfile is "a/b/foo.orig" we assume the
    # diff is between a file and its backup. In this case, the original
    # file should be patched (see original mpatch code).
    isbackup = (abase == bbase and bfile.startswith(afile))
    fname = None
    if not missing:
        if gooda and goodb:
            fname = isbackup and afile or bfile
        elif gooda:
            fname = afile

    if not fname:
        if not nullb:
            fname = isbackup and afile or bfile
        elif not nulla:
            fname = afile
        else:
            raise PatchError(_("undefined source and destination files"))

-    return fname, create, remove
+    gp = patchmeta(fname)
+    if create:
+        gp.op = 'ADD'
+    elif remove:
+        gp.op = 'DELETE'
+    return gp

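makepatchmeta() infers ADD/DELETE from the /dev/null convention: a zero-length '-0,0' (or '+0,0') range against /dev/null. A sketch of just that check, with the hunk fields passed as plain ints (illustrative helper, not the module's API):

```python
def infer_op(afile, bfile, starta, lena, startb, lenb):
    # /dev/null plus an empty range on the same side marks a file
    # creation (old side) or deletion (new side).
    nulla = afile == '/dev/null'
    nullb = bfile == '/dev/null'
    if nulla and starta == 0 and lena == 0:
        return 'ADD'
    if nullb and startb == 0 and lenb == 0:
        return 'DELETE'
    return None

print(infer_op('/dev/null', 'b/new.txt', 0, 0, 1, 3))  # → ADD
print(infer_op('a/old.txt', '/dev/null', 1, 3, 0, 0))  # → DELETE
```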
def scangitpatch(lr, firstline):
    """
    Git patches can emit:
    - rename a to b
    - change b
    - copy a to c
    - change c

    We cannot apply this sequence as-is: the renamed 'a' cannot be
    found, since it has already been renamed. And we cannot copy from
    'b' instead, because 'b' has already been changed. So we scan the
    git patch for copy and rename commands, which lets us perform the
    copies ahead of time.
    """
    pos = 0
    try:
        pos = lr.fp.tell()
        fp = lr.fp
    except IOError:
        fp = cStringIO.StringIO(lr.fp.read())
    gitlr = linereader(fp)
    gitlr.push(firstline)
    gitpatches = readgitpatch(gitlr)
    fp.seek(pos)
    return gitpatches

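The rename/copy hazard described in the docstring can be made concrete. A toy model (plain dicts, no Mercurial objects) of why copies must be served from a snapshot taken before the patch starts modifying files:

```python
# Patch sequence: rename a->b, modify b, copy a->c.
files = {'a': 'v1'}
snapshot = dict(files)            # what the pre-scan + filestore enable

files['b'] = files.pop('a')       # rename a -> b
files['b'] += '+edit'             # modify b
# 'a' is gone now; copying from the live tree would fail, and copying
# from 'b' would pick up the edit. The snapshot has the right content:
files['c'] = snapshot['a']
print(files == {'b': 'v1+edit', 'c': 'v1'})  # → True
```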
def iterhunks(fp):
    """Read a patch and yield the following events:
    - ("file", (afile, bfile, firsthunk, gitpatch)): select a new
    target file.
    - ("hunk", hunk): a new hunk is ready to be applied, follows a
    "file" event.
    - ("git", gitchanges): current diff is in git format, gitchanges
    maps filenames to gitpatch records. Unique event.
    """
    afile = ""
    bfile = ""
    state = None
    hunknum = 0
    emitfile = newfile = False
    gitpatches = None

    # our states
    BFILE = 1
    context = None
    lr = linereader(fp)

    while True:
        x = lr.readline()
        if not x:
            break
        if state == BFILE and (
            (not context and x[0] == '@')
            or (context is not False and x.startswith('***************'))
            or x.startswith('GIT binary patch')):
            gp = None
            if (gitpatches and
                (gitpatches[-1][0] == afile or gitpatches[-1][1] == bfile)):
                gp = gitpatches.pop()[2]
            if x.startswith('GIT binary patch'):
                h = binhunk(lr)
            else:
                if context is None and x.startswith('***************'):
                    context = True
                h = hunk(x, hunknum + 1, lr, context)
            hunknum += 1
            if emitfile:
                emitfile = False
-                yield 'file', (afile, bfile, h, gp)
+                yield 'file', (afile, bfile, h, gp and gp.copy() or None)
            yield 'hunk', h
        elif x.startswith('diff --git'):
            m = gitre.match(x)
            if not m:
                continue
            if gitpatches is None:
                # scan whole input for git metadata
                gitpatches = [('a/' + gp.path, 'b/' + gp.path, gp) for gp
                              in scangitpatch(lr, x)]
-                yield 'git', [g[2] for g in gitpatches
+                yield 'git', [g[2].copy() for g in gitpatches
                              if g[2].op in ('COPY', 'RENAME')]
                gitpatches.reverse()
            afile = 'a/' + m.group(1)
            bfile = 'b/' + m.group(2)
            while afile != gitpatches[-1][0] and bfile != gitpatches[-1][1]:
                gp = gitpatches.pop()[2]
-                yield 'file', ('a/' + gp.path, 'b/' + gp.path, None, gp)
+                yield 'file', ('a/' + gp.path, 'b/' + gp.path, None, gp.copy())
            gp = gitpatches[-1][2]
            # copy/rename + modify should modify target, not source
            if gp.op in ('COPY', 'DELETE', 'RENAME', 'ADD') or gp.mode:
                afile = bfile
            newfile = True
        elif x.startswith('---'):
            # check for a unified diff
            l2 = lr.readline()
            if not l2.startswith('+++'):
                lr.push(l2)
                continue
            newfile = True
            context = False
            afile = parsefilename(x)
            bfile = parsefilename(l2)
        elif x.startswith('***'):
            # check for a context diff
            l2 = lr.readline()
            if not l2.startswith('---'):
                lr.push(l2)
                continue
            l3 = lr.readline()
            lr.push(l3)
            if not l3.startswith("***************"):
                lr.push(l2)
                continue
            newfile = True
            context = True
            afile = parsefilename(x)
            bfile = parsefilename(l2)

        if newfile:
            newfile = False
            emitfile = True
            state = BFILE
            hunknum = 0

    while gitpatches:
        gp = gitpatches.pop()[2]
-        yield 'file', ('a/' + gp.path, 'b/' + gp.path, None, gp)
+        yield 'file', ('a/' + gp.path, 'b/' + gp.path, None, gp.copy())

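A consumer of iterhunks() sees a flat event stream. A sketch with a hand-written stream (hunk payloads replaced by placeholder strings) showing how 'hunk' events attach to the preceding 'file' event:

```python
# Hypothetical event stream mirroring iterhunks()'s protocol:
events = [
    ('file', ('a/foo.c', 'b/foo.c', '<hunk1>', None)),
    ('hunk', '<hunk1>'),
    ('hunk', '<hunk2>'),
    ('file', ('a/bar.c', 'b/bar.c', '<hunk3>', None)),
    ('hunk', '<hunk3>'),
]

def hunks_per_file(events):
    # Group 'hunk' events under the most recent 'file' event.
    counts, current = {}, None
    for state, values in events:
        if state == 'file':
            current = values[1]        # bfile names the target
            counts[current] = 0
        elif state == 'hunk' and current is not None:
            counts[current] += 1
    return counts

print(hunks_per_file(events))  # → {'b/foo.c': 2, 'b/bar.c': 1}
```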
def applydiff(ui, fp, backend, store, strip=1, eolmode='strict'):
    """Reads a patch from fp and tries to apply it.

    Returns 0 for a clean patch, -1 if any rejects were found and 1 if
    there was any fuzz.

    If 'eolmode' is 'strict', the patch content and patched file are
    read in binary mode. Otherwise, line endings are ignored when
    patching then normalized according to 'eolmode'.
    """
    return _applydiff(ui, fp, patchfile, backend, store, strip=strip,
                      eolmode=eolmode)

def _applydiff(ui, fp, patcher, backend, store, strip=1,
               eolmode='strict'):

    def pstrip(p):
        return pathstrip(p, strip - 1)[1]

    rejects = 0
    err = 0
    current_file = None

    for state, values in iterhunks(fp):
        if state == 'hunk':
            if not current_file:
                continue
            ret = current_file.apply(values)
            if ret > 0:
                err = 1
        elif state == 'file':
            if current_file:
                rejects += current_file.close()
                current_file = None
            afile, bfile, first_hunk, gp = values
-            copysource = None
            if gp:
-                path = pstrip(gp.path)
+                gp.path = pstrip(gp.path)
                if gp.oldpath:
-                    copysource = pstrip(gp.oldpath)
-                if gp.op == 'RENAME':
-                    backend.unlink(copysource)
-                if not first_hunk:
-                    if gp.op == 'DELETE':
-                        backend.unlink(path)
-                        continue
-                    data, mode = None, None
-                    if gp.op in ('RENAME', 'COPY'):
-                        data, mode = store.getfile(copysource)
-                    if gp.mode:
-                        mode = gp.mode
-                        if gp.op == 'ADD':
-                            # Added files without content have no hunk and
-                            # must be created
-                            data = ''
-                    if data or mode:
-                        if (gp.op in ('ADD', 'RENAME', 'COPY')
-                            and backend.exists(path)):
-                            raise PatchError(_("cannot create %s: destination "
-                                               "already exists") % path)
-                        backend.setfile(path, data, mode, copysource)
-            if not first_hunk:
+                    gp.oldpath = pstrip(gp.oldpath)
+            else:
+                gp = makepatchmeta(backend, afile, bfile, first_hunk, strip)
+            if gp.op == 'RENAME':
+                backend.unlink(gp.oldpath)
+            if not first_hunk:
+                if gp.op == 'DELETE':
+                    backend.unlink(gp.path)
+                    continue
+                data, mode = None, None
+                if gp.op in ('RENAME', 'COPY'):
+                    data, mode = store.getfile(gp.oldpath)
+                if gp.mode:
+                    mode = gp.mode
+                    if gp.op == 'ADD':
+                        # Added files without content have no hunk and
+                        # must be created
+                        data = ''
+                if data or mode:
+                    if (gp.op in ('ADD', 'RENAME', 'COPY')
+                        and backend.exists(gp.path)):
+                        raise PatchError(_("cannot create %s: destination "
+                                           "already exists") % gp.path)
+                    backend.setfile(gp.path, data, mode, gp.oldpath)
                continue
            try:
-                mode = gp and gp.mode or None
-                current_file, create, remove = selectfile(
-                    backend, afile, bfile, first_hunk, strip, gp)
-                current_file = patcher(ui, current_file, backend, store, mode,
-                                       create, remove, eolmode=eolmode,
-                                       copysource=copysource)
+                current_file = patcher(ui, gp, backend, store,
+                                       eolmode=eolmode)
            except PatchError, inst:
                ui.warn(str(inst) + '\n')
                current_file = None
                rejects += 1
                continue
        elif state == 'git':
            for gp in values:
                path = pstrip(gp.oldpath)
                data, mode = backend.getfile(path)
                store.setfile(path, data, mode)
        else:
            raise util.Abort(_('unsupported parser state: %s') % state)

    if current_file:
        rejects += current_file.close()

    if rejects:
        return -1
    return err

def _externalpatch(ui, repo, patcher, patchname, strip, files,
                   similarity):
    """use <patcher> to apply <patchname> to the working directory.
    returns whether patch was applied with fuzz factor."""

    fuzz = False
    args = []
    cwd = repo.root
    if cwd:
        args.append('-d %s' % util.shellquote(cwd))
    fp = util.popen('%s %s -p%d < %s' % (patcher, ' '.join(args), strip,
                                         util.shellquote(patchname)))
    try:
        for line in fp:
            line = line.rstrip()
            ui.note(line + '\n')
            if line.startswith('patching file '):
                pf = util.parsepatchoutput(line)
                printed_file = False
                files.add(pf)
            elif line.find('with fuzz') >= 0:
                fuzz = True
                if not printed_file:
                    ui.warn(pf + '\n')
                    printed_file = True
                ui.warn(line + '\n')
            elif line.find('saving rejects to file') >= 0:
                ui.warn(line + '\n')
            elif line.find('FAILED') >= 0:
                if not printed_file:
                    ui.warn(pf + '\n')
                    printed_file = True
                ui.warn(line + '\n')
    finally:
        if files:
            cfiles = list(files)
            cwd = repo.getcwd()
            if cwd:
                cfiles = [util.pathto(repo.root, cwd, f)
                          for f in cfiles]
            scmutil.addremove(repo, cfiles, similarity=similarity)
        code = fp.close()
        if code:
            raise PatchError(_("patch command failed: %s") %
                             util.explainexit(code)[0])
    return fuzz

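_externalpatch() drives an external patch program and scrapes its stdout. A simplified sketch of that scraping (util.parsepatchoutput also handles quoted filenames, which this skips):

```python
def scan_patch_output(lines):
    # Track 'patching file X' targets and flag fuzz / failed hunks.
    files, fuzz, failed = set(), False, False
    for line in lines:
        line = line.rstrip()
        if line.startswith('patching file '):
            files.add(line[len('patching file '):])
        elif 'with fuzz' in line:
            fuzz = True
        elif 'FAILED' in line:
            failed = True
    return files, fuzz, failed

out = [
    'patching file src/main.c',
    'Hunk #2 succeeded at 40 with fuzz 1.',
]
print(scan_patch_output(out))  # → ({'src/main.c'}, True, False)
```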
def internalpatch(ui, repo, patchobj, strip, files=None, eolmode='strict',
                  similarity=0):
    """use builtin patch to apply <patchobj> to the working directory.
    returns whether patch was applied with fuzz factor."""

    if files is None:
        files = set()
    if eolmode is None:
        eolmode = ui.config('patch', 'eol', 'strict')
    if eolmode.lower() not in eolmodes:
        raise util.Abort(_('unsupported line endings type: %s') % eolmode)
    eolmode = eolmode.lower()

    store = filestore()
    backend = workingbackend(ui, repo, similarity)
    try:
        fp = open(patchobj, 'rb')
    except TypeError:
        fp = patchobj
    try:
        ret = applydiff(ui, fp, backend, store, strip=strip,
                        eolmode=eolmode)
    finally:
        if fp != patchobj:
            fp.close()
        files.update(backend.close())
        store.close()
    if ret < 0:
        raise PatchError(_('patch failed to apply'))
    return ret > 0

def patch(ui, repo, patchname, strip=1, files=None, eolmode='strict',
          similarity=0):
    """Apply <patchname> to the working directory.

    'eolmode' specifies how end of lines should be handled. It can be:
    - 'strict': inputs are read in binary mode, EOLs are preserved
    - 'crlf': EOLs are ignored when patching and reset to CRLF
    - 'lf': EOLs are ignored when patching and reset to LF
    - None: get it from user settings, default to 'strict'
    'eolmode' is ignored when using an external patcher program.

    Returns whether patch was applied with fuzz factor.
    """
    patcher = ui.config('ui', 'patch')
    if files is None:
        files = set()
    try:
        if patcher:
            return _externalpatch(ui, repo, patcher, patchname, strip,
                                  files, similarity)
        return internalpatch(ui, repo, patchname, strip, files, eolmode,
                             similarity)
    except PatchError, err:
        raise util.Abort(str(err))

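The eolmode resolution described in the docstring (and validated in internalpatch above) can be sketched as follows; `resolveeol` is an illustrative helper, and the exact contents of the module-level `eolmodes` list are assumed:

```python
# Assumed supported modes, mirroring the module-level 'eolmodes' list.
eolmodes = ['strict', 'crlf', 'lf', 'auto']

def resolveeol(eolmode, configured='strict'):
    # None defers to the user's patch.eol setting; names are
    # case-insensitive; unknown names are rejected.
    if eolmode is None:
        eolmode = configured
    eolmode = eolmode.lower()
    if eolmode not in eolmodes:
        raise ValueError('unsupported line endings type: %s' % eolmode)
    return eolmode
```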
def changedfiles(ui, repo, patchpath, strip=1):
    backend = fsbackend(ui, repo.root)
    fp = open(patchpath, 'rb')
    try:
        changed = set()
        for state, values in iterhunks(fp):
            if state == 'file':
                afile, bfile, first_hunk, gp = values
                if gp:
-                   changed.add(pathstrip(gp.path, strip - 1)[1])
-                   if gp.op == 'RENAME':
-                       changed.add(pathstrip(gp.oldpath, strip - 1)[1])
-               if not first_hunk:
-                   continue
-               current_file, create, remove = selectfile(
-                   backend, afile, bfile, first_hunk, strip, gp)
-               changed.add(current_file)
+                   gp.path = pathstrip(gp.path, strip - 1)[1]
+                   if gp.oldpath:
+                       gp.oldpath = pathstrip(gp.oldpath, strip - 1)[1]
+               else:
+                   gp = makepatchmeta(backend, afile, bfile, first_hunk, strip)
+               changed.add(gp.path)
+               if gp.op == 'RENAME':
+                   changed.add(gp.oldpath)
            elif state not in ('hunk', 'git'):
                raise util.Abort(_('unsupported parser state: %s') % state)
        return changed
    finally:
        fp.close()

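changedfiles relies on pathstrip to drop leading path components, much like `patch -p`. A simplified sketch of that behavior (the real pathstrip also normalizes repeated separators, so treat this as an approximation):

```python
def pathstrip(path, strip):
    # Return (stripped prefix, remaining path) after dropping
    # 'strip' leading components -- a simplification of the real helper.
    parts = path.split('/')
    return '/'.join(parts[:strip]), '/'.join(parts[strip:])
```

With strip=1 (the default patch level), 'a/mercurial/patch.py' yields 'mercurial/patch.py' as the remaining path.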
def b85diff(to, tn):
    '''print base85-encoded binary diff'''
    def gitindex(text):
        if not text:
            return hex(nullid)
        l = len(text)
        s = util.sha1('blob %d\0' % l)
        s.update(text)
        return s.hexdigest()

    def fmtline(line):
        l = len(line)
        if l <= 26:
            l = chr(ord('A') + l - 1)
        else:
            l = chr(l - 26 + ord('a') - 1)
        return '%c%s\n' % (l, base85.b85encode(line, True))

    def chunk(text, csize=52):
        l = len(text)
        i = 0
        while i < l:
            yield text[i:i + csize]
            i += csize

    tohash = gitindex(to)
    tnhash = gitindex(tn)
    if tohash == tnhash:
        return ""

    # TODO: deltas
    ret = ['index %s..%s\nGIT binary patch\nliteral %s\n' %
           (tohash, tnhash, len(tn))]
    for l in chunk(zlib.compress(tn)):
        ret.append(fmtline(l))
    ret.append('\n')
    return ''.join(ret)

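The length prefix emitted by fmtline follows the git binary patch convention: payload lengths 1-26 map to 'A'-'Z' and 27-52 to 'a'-'z'. The decoder below is illustrative (the consumer's side of the round trip), not part of this module:

```python
def lenchar(l):
    # Encode a payload length the way fmtline does.
    if l <= 26:
        return chr(ord('A') + l - 1)
    return chr(l - 26 + ord('a') - 1)

def declen(c):
    # Inverse mapping, as a patch consumer would apply it.
    if 'A' <= c <= 'Z':
        return ord(c) - ord('A') + 1
    return ord(c) - ord('a') + 27

# chunk() caps payloads at 52 bytes, so every length fits
# in a single prefix character.
```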
class GitDiffRequired(Exception):
    pass

def diffopts(ui, opts=None, untrusted=False):
    def get(key, name=None, getter=ui.configbool):
        return ((opts and opts.get(key)) or
                getter('diff', name or key, None, untrusted=untrusted))
    return mdiff.diffopts(
        text=opts and opts.get('text'),
        git=get('git'),
        nodates=get('nodates'),
        showfunc=get('show_function', 'showfunc'),
        ignorews=get('ignore_all_space', 'ignorews'),
        ignorewsamount=get('ignore_space_change', 'ignorewsamount'),
        ignoreblanklines=get('ignore_blank_lines', 'ignoreblanklines'),
        context=get('unified', getter=ui.config))

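The get() closure in diffopts implements a simple precedence rule: an explicit command-line option wins, otherwise the [diff] config section is consulted. The same rule in isolation, with a plain dict standing in for the config getter:

```python
def get(opts, config, key):
    # Explicit option first, config fallback second; note that a
    # false-y option value also falls through to the config.
    return (opts and opts.get(key)) or config.get(key)
```

The false-y fallthrough is worth noticing: passing `git=False` explicitly cannot override a config that enables it, because `or` only sees truthiness.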
def diff(repo, node1=None, node2=None, match=None, changes=None, opts=None,
         losedatafn=None, prefix=''):
    '''yields diff of changes to files between two nodes, or node and
    working directory.

    if node1 is None, use first dirstate parent instead.
    if node2 is None, compare node1 with working directory.

    losedatafn(**kwarg) is a callable run when opts.upgrade=True and
    every time some change cannot be represented with the current
    patch format. Return False to upgrade to git patch format, True to
    accept the loss or raise an exception to abort the diff. It is
    called with the name of current file being diffed as 'fn'. If set
    to None, patches will always be upgraded to git format when
    necessary.

    prefix is a filename prefix that is prepended to all filenames on
    display (used for subrepos).
    '''

    if opts is None:
        opts = mdiff.defaultopts

    if not node1 and not node2:
        node1 = repo.dirstate.p1()

    def lrugetfilectx():
        cache = {}
        order = []
        def getfilectx(f, ctx):
            fctx = ctx.filectx(f, filelog=cache.get(f))
            if f not in cache:
                if len(cache) > 20:
                    del cache[order.pop(0)]
                cache[f] = fctx.filelog()
            else:
                order.remove(f)
            order.append(f)
            return fctx
        return getfilectx
    getfilectx = lrugetfilectx()

    ctx1 = repo[node1]
    ctx2 = repo[node2]

    if not changes:
        changes = repo.status(ctx1, ctx2, match=match)
    modified, added, removed = changes[:3]

    if not modified and not added and not removed:
        return []

    revs = None
    if not repo.ui.quiet:
        hexfunc = repo.ui.debugflag and hex or short
        revs = [hexfunc(node) for node in [node1, node2] if node]

    copy = {}
    if opts.git or opts.upgrade:
        copy = copies.copies(repo, ctx1, ctx2, repo[nullid])[0]

    difffn = lambda opts, losedata: trydiff(repo, revs, ctx1, ctx2,
        modified, added, removed, copy, getfilectx, opts, losedata, prefix)
    if opts.upgrade and not opts.git:
        try:
            def losedata(fn):
                if not losedatafn or not losedatafn(fn=fn):
                    raise GitDiffRequired()
            # Buffer the whole output until we are sure it can be generated
            return list(difffn(opts.copy(git=False), losedata))
        except GitDiffRequired:
            return difffn(opts.copy(git=True), None)
    else:
        return difffn(opts, None)

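lrugetfilectx keeps a small bounded cache of filelogs with least-recently-used eviction. The same idea, detached from filectx objects (`makelru` is a hypothetical name):

```python
def makelru(maxsize=3):
    # Bounded cache with LRU eviction, as in lrugetfilectx above.
    cache, order = {}, []
    def get(key, compute):
        if key not in cache:
            if len(cache) >= maxsize:
                # evict the least recently used entry
                del cache[order.pop(0)]
            cache[key] = compute(key)
        else:
            order.remove(key)
        order.append(key)
        return cache[key]
    return get
```

The linear `order.remove(key)` is fine here because the cache is capped at a handful of entries; a larger cache would want an ordered dict instead.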
def difflabel(func, *args, **kw):
    '''yields 2-tuples of (output, label) based on the output of func()'''
    prefixes = [('diff', 'diff.diffline'),
                ('copy', 'diff.extended'),
                ('rename', 'diff.extended'),
                ('old', 'diff.extended'),
                ('new', 'diff.extended'),
                ('deleted', 'diff.extended'),
                ('---', 'diff.file_a'),
                ('+++', 'diff.file_b'),
                ('@@', 'diff.hunk'),
                ('-', 'diff.deleted'),
                ('+', 'diff.inserted')]

    for chunk in func(*args, **kw):
        lines = chunk.split('\n')
        for i, line in enumerate(lines):
            if i != 0:
                yield ('\n', '')
            stripline = line
            if line and line[0] in '+-':
                # highlight trailing whitespace, but only in changed lines
                stripline = line.rstrip()
            for prefix, label in prefixes:
                if stripline.startswith(prefix):
                    yield (stripline, label)
                    break
            else:
                yield (line, '')
            if line != stripline:
                yield (line[len(stripline):], 'diff.trailingwhitespace')

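The prefixes list in difflabel is order-sensitive: '---' and '+++' must be tried before the bare '-' and '+' rules, or file headers would be labeled as deleted/inserted lines. A reduced sketch of the first-match-wins loop:

```python
prefixes = [('---', 'diff.file_a'),
            ('+++', 'diff.file_b'),
            ('@@', 'diff.hunk'),
            ('-', 'diff.deleted'),
            ('+', 'diff.inserted')]

def label(line):
    # First matching prefix wins, mirroring difflabel's inner loop.
    for prefix, lab in prefixes:
        if line.startswith(prefix):
            return lab
    return ''
```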
def diffui(*args, **kw):
    '''like diff(), but yields 2-tuples of (output, label) for ui.write()'''
    return difflabel(diff, *args, **kw)


def _addmodehdr(header, omode, nmode):
    if omode != nmode:
        header.append('old mode %s\n' % omode)
        header.append('new mode %s\n' % nmode)

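Usage of _addmodehdr is straightforward: it appends a mode-change header pair only when the mode actually changed, e.g. when a file gains the executable bit. A standalone copy for illustration:

```python
def addmodehdr(header, omode, nmode):
    # Same logic as _addmodehdr above.
    if omode != nmode:
        header.append('old mode %s\n' % omode)
        header.append('new mode %s\n' % nmode)

header = []
addmodehdr(header, '100644', '100755')   # chmod +x
addmodehdr(header, '100755', '100755')   # no change, appends nothing
```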
def trydiff(repo, revs, ctx1, ctx2, modified, added, removed,
            copy, getfilectx, opts, losedatafn, prefix):

    def join(f):
        return os.path.join(prefix, f)

    date1 = util.datestr(ctx1.date())
    man1 = ctx1.manifest()

    gone = set()
    gitmode = {'l': '120000', 'x': '100755', '': '100644'}

    copyto = dict([(v, k) for k, v in copy.items()])

    if opts.git:
        revs = None

    for f in sorted(modified + added + removed):
        to = None
        tn = None
        dodiff = True
        header = []
        if f in man1:
            to = getfilectx(f, ctx1).data()
        if f not in removed:
            tn = getfilectx(f, ctx2).data()
        a, b = f, f
        if opts.git or losedatafn:
            if f in added:
                mode = gitmode[ctx2.flags(f)]
                if f in copy or f in copyto:
                    if opts.git:
                        if f in copy:
                            a = copy[f]
                        else:
                            a = copyto[f]
                        omode = gitmode[man1.flags(a)]
                        _addmodehdr(header, omode, mode)
                        if a in removed and a not in gone:
                            op = 'rename'
                            gone.add(a)
                        else:
                            op = 'copy'
                        header.append('%s from %s\n' % (op, join(a)))
                        header.append('%s to %s\n' % (op, join(f)))
                        to = getfilectx(a, ctx1).data()
                    else:
                        losedatafn(f)
                else:
                    if opts.git:
                        header.append('new file mode %s\n' % mode)
                    elif ctx2.flags(f):
                        losedatafn(f)
                # In theory, if tn was copied or renamed we should check
                # if the source is binary too but the copy record already
                # forces git mode.
                if util.binary(tn):
                    if opts.git:
                        dodiff = 'binary'
                    else:
                        losedatafn(f)
                if not opts.git and not tn:
                    # regular diffs cannot represent new empty file
                    losedatafn(f)
            elif f in removed:
                if opts.git:
                    # have we already reported a copy above?
                    if ((f in copy and copy[f] in added
                         and copyto[copy[f]] == f) or
                        (f in copyto and copyto[f] in added
                         and copy[copyto[f]] == f)):
                        dodiff = False
                    else:
                        header.append('deleted file mode %s\n' %
                                      gitmode[man1.flags(f)])
                elif not to or util.binary(to):
                    # regular diffs cannot represent empty file deletion
                    losedatafn(f)
            else:
                oflag = man1.flags(f)
                nflag = ctx2.flags(f)
                binary = util.binary(to) or util.binary(tn)
                if opts.git:
                    _addmodehdr(header, gitmode[oflag], gitmode[nflag])
                    if binary:
                        dodiff = 'binary'
                elif binary or nflag != oflag:
                    losedatafn(f)
        if opts.git:
            header.insert(0, mdiff.diffline(revs, join(a), join(b), opts))

        if dodiff:
            if dodiff == 'binary':
                text = b85diff(to, tn)
            else:
                text = mdiff.unidiff(to, date1,
                                     # ctx2 date may be dynamic
                                     tn, util.datestr(ctx2.date()),
                                     join(a), join(b), revs, opts=opts)
            if header and (text or len(header) > 1):
                yield ''.join(header)
            if text:
                yield text

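trydiff decides between 'rename' and 'copy' headers by checking whether the copy source is itself being removed and has not already been claimed by another rename (the `gone` set). Isolated as a hypothetical helper:

```python
def copyop(a, removed, gone):
    # A copy source that is also being removed (and not yet claimed
    # by another rename) is reported as a rename; otherwise a copy.
    if a in removed and a not in gone:
        gone.add(a)
        return 'rename'
    return 'copy'
```

Claiming the source via `gone` matters when one file is the copy source of several targets: only the first can be the rename, the rest must be copies.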
def diffstatsum(stats):
    maxfile, maxtotal, addtotal, removetotal, binary = 0, 0, 0, 0, False
    for f, a, r, b in stats:
        maxfile = max(maxfile, encoding.colwidth(f))
        maxtotal = max(maxtotal, a + r)
        addtotal += a
        removetotal += r
        binary = binary or b

    return maxfile, maxtotal, addtotal, removetotal, binary

def diffstatdata(lines):
    diffre = re.compile('^diff .*-r [a-z0-9]+\s(.*)$')

    results = []
    filename, adds, removes = None, 0, 0

    def addresult():
        if filename:
            isbinary = adds == 0 and removes == 0
            results.append((filename, adds, removes, isbinary))

    for line in lines:
        if line.startswith('diff'):
            addresult()
            # set numbers to 0 anyway when starting new file
            adds, removes = 0, 0
            if line.startswith('diff --git'):
                filename = gitre.search(line).group(1)
            elif line.startswith('diff -r'):
                # format: "diff -r ... -r ... filename"
                filename = diffre.search(line).group(1)
        elif line.startswith('+') and not line.startswith('+++'):
            adds += 1
        elif line.startswith('-') and not line.startswith('---'):
            removes += 1
    addresult()
    return results

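diffstatdata's counting rule is the classic diffstat one: a line counts as an addition when it starts with '+' but not '+++', and symmetrically for removals, so the '---'/'+++' file headers are never counted. Isolated (`countchanges` is an illustrative name):

```python
def countchanges(lines):
    # Count added/removed lines of one diff, skipping file headers.
    adds = removes = 0
    for line in lines:
        if line.startswith('+') and not line.startswith('+++'):
            adds += 1
        elif line.startswith('-') and not line.startswith('---'):
            removes += 1
    return adds, removes
```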
def diffstat(lines, width=80, git=False):
    output = []
    stats = diffstatdata(lines)
    maxname, maxtotal, totaladds, totalremoves, hasbinary = diffstatsum(stats)

    countwidth = len(str(maxtotal))
    if hasbinary and countwidth < 3:
        countwidth = 3
    graphwidth = width - countwidth - maxname - 6
    if graphwidth < 10:
        graphwidth = 10

    def scale(i):
        if maxtotal <= graphwidth:
            return i
        # If diffstat runs out of room it doesn't print anything,
        # which isn't very useful, so always print at least one + or -
        # if there were at least some changes.
        return max(i * graphwidth // maxtotal, int(bool(i)))

    for filename, adds, removes, isbinary in stats:
        if git and isbinary:
            count = 'Bin'
        else:
            count = adds + removes
        pluses = '+' * scale(adds)
        minuses = '-' * scale(removes)
        output.append(' %s%s | %*s %s%s\n' %
                      (filename, ' ' * (maxname - encoding.colwidth(filename)),
                       countwidth, count, pluses, minuses))

    if stats:
        output.append(_(' %d files changed, %d insertions(+), %d deletions(-)\n')
                      % (len(stats), totaladds, totalremoves))

    return ''.join(output)

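The scale() closure compresses the +/- histogram to the available width with integer arithmetic, but never rounds a nonzero count down to an empty graph. The same math with maxtotal and graphwidth passed explicitly:

```python
def scale(i, maxtotal, graphwidth):
    # Shrink count i to the graph width; keep at least one marker
    # for any nonzero count so small changes stay visible.
    if maxtotal <= graphwidth:
        return i
    return max(i * graphwidth // maxtotal, int(bool(i)))
```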
def diffstatui(*args, **kw):
    '''like diffstat(), but yields 2-tuples of (output, label) for
    ui.write()
    '''

    for line in diffstat(*args, **kw).splitlines():
        if line and line[-1] in '+-':
            name, graph = line.rsplit(' ', 1)
            yield (name + ' ', '')
            m = re.search(r'\++', graph)
            if m:
                yield (m.group(0), 'diffstat.inserted')
            m = re.search(r'-+', graph)
            if m:
                yield (m.group(0), 'diffstat.deleted')
        else:
            yield (line, '')
    yield ('\n', '')