wireproto: add streams to frame-based protocol

Previously, the frame-based protocol was just a series of frames, with each
frame associated with a request ID.

In order to scale the protocol, we'll want to enable the use of compression.
While it is possible to enable compression at the socket/pipe level, this has
its disadvantages. The big one is it undermines the point of frames being
standalone, atomic units that can be read and written: if you add compression
above the framing protocol, you are back to having a stream-based protocol as
opposed to something frame-based.

So in order to preserve frames, compression needs to occur at the frame
payload level.

Compressing each frame's payload individually will limit compression ratios
because the window size of the compressor will be limited by the max frame
size, which is 32-64kb as currently defined. It will also add CPU overhead, as
it is more efficient for compressors to operate on fewer, larger blocks of
data than more, smaller blocks.

So compressing each frame independently is out.

This means we need to compress each frame's payload as if it is part of a
larger stream.

The simplest approach is to have 1 stream per connection. This could certainly
work. However, it has disadvantages (documented below). We could also have 1
stream per RPC/command invocation. (This is the model HTTP/2 goes with.) This
also has disadvantages.

The main disadvantage to one global stream is that it has the very real
potential to create CPU bottlenecks doing compression. Networks are only
getting faster and the performance of single CPU cores has been relatively
flat. Newer compression formats like zstandard offer better CPU cycle
efficiency than predecessors like zlib. But it is still all too common to
saturate your CPU with compression overhead long before you saturate the
network pipe.

The main disadvantage with streams per request is that you can't reap the
benefits of the compression context for multiple requests. For example, if you
send 1000 RPC requests (or HTTP/2 requests for that matter), the response to
each would have its own compression context. The overall size of the raw
responses would be larger because compression contexts wouldn't be able to
reference data from another request or response.

The approach for streams as implemented in this commit is to support N streams
per connection and for streams to potentially span requests and responses. As
explained by the added internals docs, this facilitates servers and clients
delegating independent streams and compression to independent threads / CPU
cores. This helps alleviate the CPU bottleneck of compression. This design
also allows compression contexts to be reused across requests/responses. This
can result in improved compression ratios and less overhead for compressors
and decompressors having to build new contexts.

Another feature that was defined was the ability for individual frames within
a stream to declare whether that individual frame's payload uses the content
encoding (read: compression) defined by the stream. The idea here is that some
servers may serve data from a combination of caches and dynamic resolution.
Data coming from caches may be pre-compressed. We want to facilitate servers
being able to essentially stream bytes from caches to the wire with minimal
overhead. Being able to mix and match which frames are compressed within a
stream enables this type of advanced server functionality.

This commit defines the new streams mechanism.
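To make the stream mechanism concrete, here is a rough sketch in Python of a
frame that carries a stream ID and per-frame stream flags alongside the
request ID. The field layout, widths, flag values, and function name below are
hypothetical and do not reflect the actual frame encoding::

  import struct

  # Hypothetical stream flag: this frame's payload is encoded with the
  # stream's declared content encoding (e.g. compressed).
  STREAM_FLAG_ENCODED = 0x01

  def makeframe(requestid, streamid, streamflags, frametype, frameflags,
                payload):
      # Illustrative header: request ID (16 bits), stream ID (8 bits),
      # stream flags (8 bits), frame type (8 bits), frame flags (8 bits),
      # followed by the payload bytes.
      header = struct.pack('<HBBBB', requestid, streamid, streamflags,
                           frametype, frameflags)
      return header + payload

  # Two frames in the same stream (ID 2) for the same request (ID 1): one
  # whose payload goes through the stream's encoder, and one that bypasses
  # it, e.g. pre-compressed bytes served straight from a cache.
  encoded = makeframe(1, 2, STREAM_FLAG_ENCODED, 0, 0, b'<encoded payload>')
  raw = makeframe(1, 2, 0, 0, 0, b'<pre-compressed cache data>')

Similarly, the benefit of compression contexts that span multiple requests can
be illustrated with zlib from the Python standard library (the protocol itself
is not tied to any particular compression format): compressing many similar
responses through one shared context yields a much smaller total than giving
each response its own context::

  import zlib

  responses = [b'{"status": "ok", "payload": "mostly similar bytes"}'] * 1000

  def compress_alone(data):
      # A fresh compression context per response, as in a
      # stream-per-request model.
      c = zlib.compressobj()
      return c.compress(data) + c.flush()

  per_request_total = sum(len(compress_alone(r)) for r in responses)

  # A single context spanning all responses, as multi-request streams allow.
  # Later responses can reference data from earlier ones.
  shared = zlib.compressobj()
  shared_total = sum(len(shared.compress(r)) for r in responses)
  shared_total += len(shared.flush())

  print(per_request_total, shared_total)  # shared context is far smaller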
Basic code for supporting streams in frames has been added. But that code is
seriously lacking and doesn't fully conform to the defined protocol. For
example, we don't close any streams. And support for content encoding within
streams is not yet implemented. The change was rather invasive and I didn't
think it would be reasonable to implement the entire feature in a single
commit.

For the record, I would have loved to reuse an existing multiplexing protocol
to build the new wire protocol on top of. However, I couldn't find a protocol
that offers the performance and scaling characteristics that I desired.
Namely, it should support multiple compression contexts to facilitate scaling
out to multiple CPU cores, and compression contexts should be able to live
longer than single RPC requests.

HTTP/2 *almost* fits the bill. But the semantics of HTTP message exchange
state that streams can only live for a single request-response. We /could/
tunnel on top of HTTP/2 streams and frames with HEADER and DATA frames. But
there's no guarantee that HTTP/2 libraries and proxies would allow us to use
HTTP/2 streams and frames without the HTTP message exchange semantics defined
in RFC 7540 Section 8. Other RPC protocols like gRPC are built on top of
HTTP/2 and thus preserve its semantics of a stream per RPC invocation. Even
QUIC does this. We could attempt to invent a higher-level stream that spans
HTTP/2 streams. But this would be violating HTTP/2 because there is no
guarantee that HTTP/2 streams are routed to the same server. The best we can
do - which is what this protocol does - is shoehorn all request and response
data into a single HTTP message and create streams within. At that point,
we've defined a Content-Type in HTTP parlance. It just so happens our media
type can also work as a standalone, stream-based protocol, without leaning on
HTTP or a similar protocol.

Differential Revision: https://phab.mercurial-scm.org/D2907

merge-tools.txt
To merge files Mercurial uses merge tools.

A merge tool combines two different versions of a file into a merged
file. Merge tools are given the two files and the greatest common
ancestor of the two file versions, so they can determine the changes
made on both branches.

Merge tools are used both for :hg:`resolve`, :hg:`merge`, :hg:`update`,
:hg:`backout` and in several extensions.

Usually, the merge tool tries to automatically reconcile the files by
combining all non-overlapping changes that occurred separately in
the two different evolutions of the same initial base file. Furthermore, some
interactive merge programs make it easier to manually resolve
conflicting merges, either in a graphical way, or by inserting some
conflict markers. Mercurial does not include any interactive merge
programs but relies on external tools for that.

Available merge tools
=====================

External merge tools and their properties are configured in the
merge-tools configuration section - see hgrc(5) - but they can often just
be named by their executable.
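For example, a hypothetical tool (the tool name, path and argument list below
are placeholders) could be configured like this::

  [merge-tools]
  mymergetool.executable = /usr/local/bin/mymergetool
  mymergetool.args = $base $local $other -o $output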
A merge tool is generally usable if its executable can be found on the
system and if it can handle the merge. The executable is found if it
is an absolute or relative executable path or the name of an
application in the executable search path. The tool is assumed to be
able to handle the merge if it can handle symlinks if the file is a
symlink, if it can handle binary files if the file is binary, and if a
GUI is available if the tool requires a GUI.
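Continuing the hypothetical example above, the capability and priority
properties are per-tool settings in the same section (the values shown are
illustrative)::

  [merge-tools]
  mymergetool.priority = 10
  mymergetool.gui = True
  mymergetool.binary = True
  mymergetool.symlink = False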
There are some internal merge tools which can be used. The internal
merge tools are:

.. internaltoolsmarker

Internal tools are always available and do not require a GUI but will by
default not handle symlinks or binary files.
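An internal tool can also be selected explicitly. For example, to re-merge all
unresolved files with the internal ``:other`` tool, which uses the other
version of the files as the merged version::

  hg resolve --tool :other --all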
Choosing a merge tool
=====================

Mercurial uses these rules when deciding which merge tool to use:

1. If a tool has been specified with the --tool option to merge or resolve, it
   is used. If it is the name of a tool in the merge-tools configuration, its
   configuration is used. Otherwise the specified tool must be executable by
   the shell.

2. If the ``HGMERGE`` environment variable is present, its value is used and
   must be executable by the shell.

3. If the filename of the file to be merged matches any of the patterns in the
   merge-patterns configuration section, the first usable merge tool
   corresponding to a matching pattern is used. Here, binary capabilities of
   the merge tool are not considered.

4. If ui.merge is set it will be considered next. If the value is not the name
   of a configured tool, the specified value is used and must be executable by
   the shell. Otherwise the named tool is used if it is usable.

5. If any usable merge tools are present in the merge-tools configuration
   section, the one with the highest priority is used.

6. If a program named ``hgmerge`` can be found on the system, it is used - but
   it will by default not be used for symlinks and binary files.

7. If the file to be merged is not binary and is not a symlink, then
   internal ``:merge`` is used.

8. Otherwise, ``:prompt`` is used.
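For instance, rules 3 and 4 above are driven by configuration like the
following, where ``mymergetool`` and ``othermergetool`` are placeholders for
tools defined in the merge-tools section::

  [merge-patterns]
  **.xml = mymergetool

  [ui]
  merge = othermergetool

Rules 1 and 2 correspond to running, for example, :hg:`merge --tool
mymergetool` and to setting the ``HGMERGE`` environment variable,
respectively.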
.. note::

   After selecting a merge program, Mercurial will by default attempt
   to merge the files using a simple merge algorithm first. Only if it doesn't
   succeed because of conflicting changes will Mercurial actually execute the
   merge program. Whether to use the simple merge algorithm first can be
   controlled by the premerge setting of the merge tool. Premerge is enabled
   by default unless the file is binary or a symlink.
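For example, premerge can be disabled for a particular tool (the tool name is
a placeholder)::

  [merge-tools]
  mymergetool.premerge = False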
See the merge-tools and ui sections of hgrc(5) for details on the
configuration of merge tools.