diff --git a/.hgignore b/.hgignore
new file mode 100644
--- /dev/null
+++ b/.hgignore
@@ -0,0 +1,46 @@
+syntax: glob
+*.orig
+*.pyc
+*.swp
+*.sqlite
+*.sqlite-journal
+*.tox
+*.egg-info
+*.egg
+*.idea
+.DS_Store*
+
+
+syntax: regexp
+
+#.filename
+^\.settings$
+^\.project$
+^\.pydevproject$
+^\.coverage$
+^\.cache.*$
+^\.rhodecode$
+
+^rcextensions
+^_dev
+^\._dev
+^build/
+^coverage\.xml$
+^data$
+^dev\.ini$
+^acceptance_tests/dev.*\.ini$
+^dist/
+^fabfile\.py$
+^htmlcov
+^junit\.xml$
+^node_modules/
+^pylint\.log$
+^rcextensions/
+^rhodecode/public/css/style\.css$
+^rhodecode/public/js/scripts\.js$
+^rhodecode\.db$
+^rhodecode\.log$
+^rhodecode_dev\.log$
+^test\.db$
+^build$
+^result$
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year>  <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+ <program>  Copyright (C) <year>  <name of author>
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<http://www.gnu.org/licenses/>.
+
+ The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
diff --git a/README.rst b/README.rst
new file mode 100644
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,13 @@
+
+
+===========
+ vcsserver
+===========
+
+Contains the package `vcsserver`.
+
+It provides a server to allow remote access to various version control backend
+systems.
+
+The intention is that this package can be run independently of RhodeCode
+Enterprise or any other non-open packages.
diff --git a/default.nix b/default.nix
new file mode 100644
--- /dev/null
+++ b/default.nix
@@ -0,0 +1,139 @@
+# Nix environment for the community edition
+#
+# This should be as lean as possible, producing just the rhodecode-vcsserver
+# derivation. Advanced tweaks that enhance the development environment live in
+# "shell.nix" so that they do not clutter this file.
+
+{ pkgs ? (import <nixpkgs> {})
+, pythonPackages ? "python27Packages"
+, pythonExternalOverrides ? self: super: {}
+, doCheck ? true
+}:
+
+let pkgs_ = pkgs; in
+
+let
+ pkgs = pkgs_.overridePackages (self: super: {
+ # Override subversion derivation to
+ # - activate python bindings
+ # - set version to 1.8
+ subversion = super.subversion18.override {
+ httpSupport = true;
+ pythonBindings = true;
+ python = self.python27Packages.python;
+ };
+ });
+
+ inherit (pkgs.lib) fix extends;
+
+ basePythonPackages = with builtins; if isAttrs pythonPackages
+ then pythonPackages
+ else getAttr pythonPackages pkgs;
+
+ elem = builtins.elem;
+ basename = path: with pkgs.lib; last (splitString "/" path);
+ startsWith = prefix: full: let
+ actualPrefix = builtins.substring 0 (builtins.stringLength prefix) full;
+ in actualPrefix == prefix;
+
+ src-filter = path: type: with pkgs.lib;
+ let
+ ext = last (splitString "." path);
+ in
+ !elem (basename path) [
+ ".git" ".hg" "__pycache__" ".eggs" "node_modules"
+ "build" "data" "tmp"] &&
+ !elem ext ["egg-info" "pyc"] &&
+ !startsWith "result" path;
+
+ rhodecode-vcsserver-src = builtins.filterSource src-filter ./.;
+
+ pythonGeneratedPackages = self: basePythonPackages.override (a: {
+ inherit self;
+ })
+ // (scopedImport {
+ self = self;
+ super = basePythonPackages;
+ inherit pkgs;
+ inherit (pkgs) fetchurl fetchgit;
+ } ./pkgs/python-packages.nix);
+
+ pythonOverrides = import ./pkgs/python-packages-overrides.nix {
+ inherit
+ basePythonPackages
+ pkgs;
+ };
+
+ pythonLocalOverrides = self: super: {
+ rhodecode-vcsserver = super.rhodecode-vcsserver.override (attrs: {
+ src = rhodecode-vcsserver-src;
+ inherit doCheck;
+
+ propagatedBuildInputs = attrs.propagatedBuildInputs ++ ([
+ pkgs.git
+ pkgs.subversion
+ ]);
+
+ # TODO: johbo: Make a nicer way to expose the parts. Maybe
+ # pkgs/default.nix?
+ passthru = {
+ pythonPackages = self;
+ };
+
+ # Somewhat snappier setup of the development environment
+ # TODO: move into shell.nix
+ # TODO: think of supporting a stable path again, so that multiple shells
+ # can share it.
+ shellHook = ''
+ # Set locale
+ export LC_ALL="en_US.UTF-8"
+
+ tmp_path=$(mktemp -d)
+ export PATH="$tmp_path/bin:$PATH"
+ export PYTHONPATH="$tmp_path/${self.python.sitePackages}:$PYTHONPATH"
+ mkdir -p $tmp_path/${self.python.sitePackages}
+ python setup.py develop --prefix $tmp_path --allow-hosts ""
+ '';
+
+ # Add VCSServer bin directory to path so that tests can find 'vcsserver'.
+ preCheck = ''
+ export PATH="$out/bin:$PATH"
+ '';
+
+ postInstall = ''
+ echo "Writing meta information for rccontrol to nix-support/rccontrol"
+ mkdir -p $out/nix-support/rccontrol
+ cp -v vcsserver/VERSION $out/nix-support/rccontrol/version
+ echo "DONE: Meta information for rccontrol written"
+
+ ln -s ${self.pyramid}/bin/* $out/bin #*/
+ ln -s ${self.gunicorn}/bin/gunicorn $out/bin/
+
+ # Symlink version control utilities
+ #
+      # We ensure that the correct version is always available as a symlink,
+      # so that users calling these tools via the profile path always use the
+      # correct version.
+ ln -s ${pkgs.git}/bin/git $out/bin
+ ln -s ${self.mercurial}/bin/hg $out/bin
+ ln -s ${pkgs.subversion}/bin/svn* $out/bin
+
+ for file in $out/bin/*; do #*/
+ wrapProgram $file \
+ --prefix PYTHONPATH : $PYTHONPATH \
+ --set PYTHONHASHSEED random
+ done
+ '';
+
+ });
+ };
+
+ # Apply all overrides and fix the final package set
+ myPythonPackages =
+ (fix
+ (extends pythonExternalOverrides
+ (extends pythonLocalOverrides
+ (extends pythonOverrides
+ pythonGeneratedPackages))));
+
+in myPythonPackages.rhodecode-vcsserver
diff --git a/development.ini b/development.ini
new file mode 100644
--- /dev/null
+++ b/development.ini
@@ -0,0 +1,93 @@
+################################################################################
+# RhodeCode VCSServer - configuration #
+# #
+################################################################################
+
+[DEFAULT]
+host = 127.0.0.1
+port = 9900
+locale = en_US.UTF-8
+# number of worker threads; set this using the formula threadpool_size = N * 6,
+# where N is the number of RhodeCode Enterprise workers, e.g. running 2 instances
+# with 8 gunicorn workers each gives 2 * 8 * 6 = 96, so threadpool_size = 96
+threadpool_size = 96
+timeout = 0
+
+# cache regions, please don't change
+beaker.cache.regions = repo_object
+beaker.cache.repo_object.type = memorylru
+beaker.cache.repo_object.max_items = 100
+# cache auto-expires after N seconds
+beaker.cache.repo_object.expire = 300
+beaker.cache.repo_object.enabled = true
+
+
+################################
+### LOGGING CONFIGURATION ####
+################################
+[loggers]
+keys = root, vcsserver, pyro4, beaker
+
+[handlers]
+keys = console
+
+[formatters]
+keys = generic
+
+#############
+## LOGGERS ##
+#############
+[logger_root]
+level = NOTSET
+handlers = console
+
+[logger_vcsserver]
+level = DEBUG
+handlers =
+qualname = vcsserver
+propagate = 1
+
+[logger_beaker]
+level = DEBUG
+handlers =
+qualname = beaker
+propagate = 1
+
+[logger_pyro4]
+level = DEBUG
+handlers =
+qualname = Pyro4
+propagate = 1
+
+
+##############
+## HANDLERS ##
+##############
+
+[handler_console]
+class = StreamHandler
+args = (sys.stderr,)
+level = DEBUG
+formatter = generic
+
+[handler_file]
+class = FileHandler
+args = ('vcsserver.log', 'a',)
+level = DEBUG
+formatter = generic
+
+[handler_file_rotating]
+class = logging.handlers.TimedRotatingFileHandler
+# 'D', 5 - rotate every 5 days;
+# other intervals such as 'h' or 'midnight' can also be used
+args = ('vcsserver.log', 'D', 5, 10,)
+level = DEBUG
+formatter = generic
+
+################
+## FORMATTERS ##
+################
+
+[formatter_generic]
+format = %(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s
+datefmt = %Y-%m-%d %H:%M:%S
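The `LOGGING CONFIGURATION` block above follows the standard-library `logging.config` fileConfig format, so it can be loaded directly from Python. A minimal sketch with a trimmed-down copy of the sections above (the temporary file path is illustrative, not part of the project):

```python
import logging
import logging.config
import tempfile
import textwrap

# A trimmed-down copy of the [loggers]/[handlers]/[formatters]
# sections from development.ini above.
config = textwrap.dedent("""
    [loggers]
    keys = root, vcsserver

    [handlers]
    keys = console

    [formatters]
    keys = generic

    [logger_root]
    level = NOTSET
    handlers = console

    [logger_vcsserver]
    level = DEBUG
    handlers =
    qualname = vcsserver
    propagate = 1

    [handler_console]
    class = StreamHandler
    args = (sys.stderr,)
    level = DEBUG
    formatter = generic

    [formatter_generic]
    format = %(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s
    datefmt = %Y-%m-%d %H:%M:%S
""")

# Write the config to a temporary file and hand it to fileConfig.
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(config)
    path = f.name

logging.config.fileConfig(path, disable_existing_loggers=False)

# The "vcsserver" logger now has its own DEBUG level and propagates
# records up to the root logger's console handler.
level = logging.getLogger("vcsserver").getEffectiveLevel()
```

Note that fileConfig reads the `format` option in raw mode, so the `%(asctime)s`-style placeholders do not clash with ConfigParser interpolation.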
diff --git a/development_pyramid.ini b/development_pyramid.ini
new file mode 100644
--- /dev/null
+++ b/development_pyramid.ini
@@ -0,0 +1,70 @@
+[app:main]
+use = egg:rhodecode-vcsserver
+pyramid.reload_templates = true
+pyramid.default_locale_name = en
+pyramid.includes =
+# cache regions, please don't change
+beaker.cache.regions = repo_object
+beaker.cache.repo_object.type = memorylru
+beaker.cache.repo_object.max_items = 100
+# cache auto-expires after N seconds
+beaker.cache.repo_object.expire = 300
+beaker.cache.repo_object.enabled = true
+locale = en_US.UTF-8
+
+
+[server:main]
+use = egg:waitress#main
+host = 0.0.0.0
+port = %(http_port)s
+
+
+################################
+### LOGGING CONFIGURATION ####
+################################
+[loggers]
+keys = root, vcsserver, beaker
+
+[handlers]
+keys = console
+
+[formatters]
+keys = generic
+
+#############
+## LOGGERS ##
+#############
+[logger_root]
+level = NOTSET
+handlers = console
+
+[logger_vcsserver]
+level = DEBUG
+handlers =
+qualname = vcsserver
+propagate = 1
+
+[logger_beaker]
+level = DEBUG
+handlers =
+qualname = beaker
+propagate = 1
+
+
+##############
+## HANDLERS ##
+##############
+
+[handler_console]
+class = StreamHandler
+args = (sys.stderr,)
+level = DEBUG
+formatter = generic
+
+################
+## FORMATTERS ##
+################
+
+[formatter_generic]
+format = %(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s
+datefmt = %Y-%m-%d %H:%M:%S
diff --git a/pip2nix.ini b/pip2nix.ini
new file mode 100644
--- /dev/null
+++ b/pip2nix.ini
@@ -0,0 +1,3 @@
+[pip2nix]
+requirements = ., -r ./requirements.txt
+output = ./pkgs/python-packages.nix
diff --git a/pkgs/python-packages-overrides.nix b/pkgs/python-packages-overrides.nix
new file mode 100644
--- /dev/null
+++ b/pkgs/python-packages-overrides.nix
@@ -0,0 +1,56 @@
+# Overrides for the generated python-packages.nix
+#
+# This function is intended to be used as an extension to the generated file
+# python-packages.nix. The main objective is to add needed C library
+# dependencies and to tweak the build instructions where needed.
+
+{ pkgs, basePythonPackages }:
+
+let
+ sed = "sed -i";
+in
+
+self: super: {
+
+ subvertpy = super.subvertpy.override (attrs: {
+ SVN_PREFIX = "${pkgs.subversion}";
+ propagatedBuildInputs = attrs.propagatedBuildInputs ++ [
+ pkgs.aprutil
+ pkgs.subversion
+ ];
+ preBuild = pkgs.lib.optionalString pkgs.stdenv.isDarwin ''
+ ${sed} -e "s/'gcc'/'clang'/" setup.py
+ '';
+ });
+
+ mercurial = super.mercurial.override (attrs: {
+ propagatedBuildInputs = attrs.propagatedBuildInputs ++ [
+ self.python.modules.curses
+ ] ++ pkgs.lib.optional pkgs.stdenv.isDarwin
+ pkgs.darwin.apple_sdk.frameworks.ApplicationServices;
+ });
+
+ pyramid = super.pyramid.override (attrs: {
+ postFixup = ''
+ wrapPythonPrograms
+ # TODO: johbo: "wrapPython" adds this magic line which
+ # confuses pserve.
+ ${sed} '/import sys; sys.argv/d' $out/bin/.pserve-wrapped
+ '';
+ });
+
+ Pyro4 = super.Pyro4.override (attrs: {
+ # TODO: Was not able to generate this version, needs further
+ # investigation.
+ name = "Pyro4-4.35";
+ src = pkgs.fetchurl {
+ url = "https://pypi.python.org/packages/source/P/Pyro4/Pyro4-4.35.src.tar.gz";
+ md5 = "cbe6cb855f086a0f092ca075005855f3";
+ };
+ });
+
+  # Avoid replacing setuptools, as doing so leads to trouble
+  # with buildPythonPackage.
+ setuptools = basePythonPackages.setuptools;
+
+}
diff --git a/pkgs/python-packages.nix b/pkgs/python-packages.nix
new file mode 100644
--- /dev/null
+++ b/pkgs/python-packages.nix
@@ -0,0 +1,363 @@
+{
+ Beaker = super.buildPythonPackage {
+ name = "Beaker-1.7.0";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/97/8e/409d2e7c009b8aa803dc9e6f239f1db7c3cdf578249087a404e7c27a505d/Beaker-1.7.0.tar.gz";
+ md5 = "386be3f7fe427358881eee4622b428b3";
+ };
+ };
+ Jinja2 = super.buildPythonPackage {
+ name = "Jinja2-2.8";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [MarkupSafe];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/f2/2f/0b98b06a345a761bec91a079ccae392d282690c2d8272e708f4d10829e22/Jinja2-2.8.tar.gz";
+ md5 = "edb51693fe22c53cee5403775c71a99e";
+ };
+ };
+ Mako = super.buildPythonPackage {
+ name = "Mako-1.0.4";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [MarkupSafe];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/7a/ae/925434246ee90b42e8ef57d3b30a0ab7caf9a2de3e449b876c56dcb48155/Mako-1.0.4.tar.gz";
+ md5 = "c5fc31a323dd4990683d2f2da02d4e20";
+ };
+ };
+ MarkupSafe = super.buildPythonPackage {
+ name = "MarkupSafe-0.23";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/c0/41/bae1254e0396c0cc8cf1751cb7d9afc90a602353695af5952530482c963f/MarkupSafe-0.23.tar.gz";
+ md5 = "f5ab3deee4c37cd6a922fb81e730da6e";
+ };
+ };
+ PasteDeploy = super.buildPythonPackage {
+ name = "PasteDeploy-1.5.2";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/0f/90/8e20cdae206c543ea10793cbf4136eb9a8b3f417e04e40a29d72d9922cbd/PasteDeploy-1.5.2.tar.gz";
+ md5 = "352b7205c78c8de4987578d19431af3b";
+ };
+ };
+ Pyro4 = super.buildPythonPackage {
+ name = "Pyro4-4.41";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [serpent];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/56/2b/89b566b4bf3e7f8ba790db2d1223852f8cb454c52cab7693dd41f608ca2a/Pyro4-4.41.tar.gz";
+ md5 = "ed69e9bfafa9c06c049a87cb0c4c2b6c";
+ };
+ };
+ WebOb = super.buildPythonPackage {
+ name = "WebOb-1.3.1";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/16/78/adfc0380b8a0d75b2d543fa7085ba98a573b1ae486d9def88d172b81b9fa/WebOb-1.3.1.tar.gz";
+ md5 = "20918251c5726956ba8fef22d1556177";
+ };
+ };
+ WebTest = super.buildPythonPackage {
+ name = "WebTest-1.4.3";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [WebOb];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/51/3d/84fd0f628df10b30c7db87895f56d0158e5411206b721ca903cb51bfd948/WebTest-1.4.3.zip";
+ md5 = "631ce728bed92c681a4020a36adbc353";
+ };
+ };
+ configobj = super.buildPythonPackage {
+ name = "configobj-5.0.6";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [six];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/64/61/079eb60459c44929e684fa7d9e2fdca403f67d64dd9dbac27296be2e0fab/configobj-5.0.6.tar.gz";
+ md5 = "e472a3a1c2a67bb0ec9b5d54c13a47d6";
+ };
+ };
+ dulwich = super.buildPythonPackage {
+ name = "dulwich-0.12.0";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/6f/04/fbe561b6d45c0ec758330d5b7f5ba4b6cb4f1ca1ab49859d2fc16320da75/dulwich-0.12.0.tar.gz";
+ md5 = "f3a8a12bd9f9dd8c233e18f3d49436fa";
+ };
+ };
+ greenlet = super.buildPythonPackage {
+ name = "greenlet-0.4.7";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/7a/9f/a1a0d9bdf3203ae1502c5a8434fe89d323599d78a106985bc327351a69d4/greenlet-0.4.7.zip";
+ md5 = "c2333a8ff30fa75c5d5ec0e67b461086";
+ };
+ };
+ gunicorn = super.buildPythonPackage {
+ name = "gunicorn-19.3.0";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/b0/3d/c476010c920926d2b5b4be0a9a5f5dc0a50c667476ad4737774d44fa7591/gunicorn-19.3.0.tar.gz";
+ md5 = "faa3e80661efd67e5e06bba32699af20";
+ };
+ };
+ hgsubversion = super.buildPythonPackage {
+ name = "hgsubversion-1.8.5";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [mercurial subvertpy];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/f7/8d/3e5719405d4b0b57db7faaf472fb836ed4c437a82bd124a2a37707c33bff/hgsubversion-1.8.5.tar.gz";
+ md5 = "afc3f096fb4dacf1d9210811f81313e0";
+ };
+ };
+ infrae.cache = super.buildPythonPackage {
+ name = "infrae.cache-1.0.1";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [Beaker repoze.lru];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/bb/f0/e7d5e984cf6592fd2807dc7bc44a93f9d18e04e6a61f87fdfb2622422d74/infrae.cache-1.0.1.tar.gz";
+ md5 = "b09076a766747e6ed2a755cc62088e32";
+ };
+ };
+ mercurial = super.buildPythonPackage {
+ name = "mercurial-3.7.3";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/e8/a0/fe6bf60a314a30299c58a5ed67de9fffeae04731201f10dc2822befb062d/mercurial-3.7.3.tar.gz";
+ md5 = "f47c9c76b7bf429dafecb71fa81c01b4";
+ };
+ };
+ mock = super.buildPythonPackage {
+ name = "mock-1.0.1";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/15/45/30273ee91feb60dabb8fbb2da7868520525f02cf910279b3047182feed80/mock-1.0.1.zip";
+ md5 = "869f08d003c289a97c1a6610faf5e913";
+ };
+ };
+ msgpack-python = super.buildPythonPackage {
+ name = "msgpack-python-0.4.6";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/15/ce/ff2840885789ef8035f66cd506ea05bdb228340307d5e71a7b1e3f82224c/msgpack-python-0.4.6.tar.gz";
+ md5 = "8b317669314cf1bc881716cccdaccb30";
+ };
+ };
+ py = super.buildPythonPackage {
+ name = "py-1.4.29";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/2a/bc/a1a4a332ac10069b8e5e25136a35e08a03f01fd6ab03d819889d79a1fd65/py-1.4.29.tar.gz";
+ md5 = "c28e0accba523a29b35a48bb703fb96c";
+ };
+ };
+ pyramid = super.buildPythonPackage {
+ name = "pyramid-1.6.1";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [setuptools WebOb repoze.lru zope.interface zope.deprecation venusian translationstring PasteDeploy];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/30/b3/fcc4a2a4800cbf21989e00454b5828cf1f7fe35c63e0810b350e56d4c475/pyramid-1.6.1.tar.gz";
+ md5 = "b18688ff3cc33efdbb098a35b45dd122";
+ };
+ };
+ pyramid-jinja2 = super.buildPythonPackage {
+ name = "pyramid-jinja2-2.5";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [pyramid zope.deprecation Jinja2 MarkupSafe];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/a1/80/595e26ffab7deba7208676b6936b7e5a721875710f982e59899013cae1ed/pyramid_jinja2-2.5.tar.gz";
+ md5 = "07cb6547204ac5e6f0b22a954ccee928";
+ };
+ };
+ pyramid-mako = super.buildPythonPackage {
+ name = "pyramid-mako-1.0.2";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [pyramid Mako];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/f1/92/7e69bcf09676d286a71cb3bbb887b16595b96f9ba7adbdc239ffdd4b1eb9/pyramid_mako-1.0.2.tar.gz";
+ md5 = "ee25343a97eb76bd90abdc2a774eb48a";
+ };
+ };
+ pytest = super.buildPythonPackage {
+ name = "pytest-2.8.5";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [py];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/b1/3d/d7ea9b0c51e0cacded856e49859f0a13452747491e842c236bbab3714afe/pytest-2.8.5.zip";
+ md5 = "8493b06f700862f1294298d6c1b715a9";
+ };
+ };
+ repoze.lru = super.buildPythonPackage {
+ name = "repoze.lru-0.6";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/6e/1e/aa15cc90217e086dc8769872c8778b409812ff036bf021b15795638939e4/repoze.lru-0.6.tar.gz";
+ md5 = "2c3b64b17a8e18b405f55d46173e14dd";
+ };
+ };
+ rhodecode-vcsserver = super.buildPythonPackage {
+ name = "rhodecode-vcsserver-4.0.0";
+ buildInputs = with self; [mock pytest WebTest];
+ doCheck = true;
+ propagatedBuildInputs = with self; [configobj dulwich hgsubversion infrae.cache mercurial msgpack-python pyramid Pyro4 simplejson subprocess32 waitress WebOb];
+ src = ./.;
+ };
+ serpent = super.buildPythonPackage {
+ name = "serpent-1.12";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/3b/19/1e0e83b47c09edaef8398655088036e7e67386b5c48770218ebb339fbbd5/serpent-1.12.tar.gz";
+ md5 = "05869ac7b062828b34f8f927f0457b65";
+ };
+ };
+ setuptools = super.buildPythonPackage {
+ name = "setuptools-20.8.1";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/c4/19/c1bdc88b53da654df43770f941079dbab4e4788c2dcb5658fb86259894c7/setuptools-20.8.1.zip";
+ md5 = "fe58a5cac0df20bb83942b252a4b0543";
+ };
+ };
+ simplejson = super.buildPythonPackage {
+ name = "simplejson-3.7.2";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/6d/89/7f13f099344eea9d6722779a1f165087cb559598107844b1ac5dbd831fb1/simplejson-3.7.2.tar.gz";
+ md5 = "a5fc7d05d4cb38492285553def5d4b46";
+ };
+ };
+ six = super.buildPythonPackage {
+ name = "six-1.9.0";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/16/64/1dc5e5976b17466fd7d712e59cbe9fb1e18bec153109e5ba3ed6c9102f1a/six-1.9.0.tar.gz";
+ md5 = "476881ef4012262dfc8adc645ee786c4";
+ };
+ };
+ subprocess32 = super.buildPythonPackage {
+ name = "subprocess32-3.2.6";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/28/8d/33ccbff51053f59ae6c357310cac0e79246bbed1d345ecc6188b176d72c3/subprocess32-3.2.6.tar.gz";
+ md5 = "754c5ab9f533e764f931136974b618f1";
+ };
+ };
+ subvertpy = super.buildPythonPackage {
+ name = "subvertpy-0.9.3";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://github.com/jelmer/subvertpy/archive/subvertpy-0.9.3.tar.gz";
+ md5 = "7b745a47128050ea5a73efcd913ec1cf";
+ };
+ };
+ translationstring = super.buildPythonPackage {
+ name = "translationstring-1.3";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/5e/eb/bee578cc150b44c653b63f5ebe258b5d0d812ddac12497e5f80fcad5d0b4/translationstring-1.3.tar.gz";
+ md5 = "a4b62e0f3c189c783a1685b3027f7c90";
+ };
+ };
+ venusian = super.buildPythonPackage {
+ name = "venusian-1.0";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/86/20/1948e0dfc4930ddde3da8c33612f6a5717c0b4bc28f591a5c5cf014dd390/venusian-1.0.tar.gz";
+ md5 = "dccf2eafb7113759d60c86faf5538756";
+ };
+ };
+ waitress = super.buildPythonPackage {
+ name = "waitress-0.8.9";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [setuptools];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/ee/65/fc9dee74a909a1187ca51e4f15ad9c4d35476e4ab5813f73421505c48053/waitress-0.8.9.tar.gz";
+ md5 = "da3f2e62b3676be5dd630703a68e2a04";
+ };
+ };
+ wheel = super.buildPythonPackage {
+ name = "wheel-0.29.0";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/c9/1d/bd19e691fd4cfe908c76c429fe6e4436c9e83583c4414b54f6c85471954a/wheel-0.29.0.tar.gz";
+ md5 = "555a67e4507cedee23a0deb9651e452f";
+ };
+ };
+ zope.deprecation = super.buildPythonPackage {
+ name = "zope.deprecation-4.1.1";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [setuptools];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/c5/c9/e760f131fcde817da6c186a3f4952b8f206b7eeb269bb6f0836c715c5f20/zope.deprecation-4.1.1.tar.gz";
+ md5 = "ce261b9384066f7e13b63525778430cb";
+ };
+ };
+ zope.interface = super.buildPythonPackage {
+ name = "zope.interface-4.1.3";
+ buildInputs = with self; [];
+ doCheck = false;
+ propagatedBuildInputs = with self; [setuptools];
+ src = fetchurl {
+ url = "https://pypi.python.org/packages/9d/81/2509ca3c6f59080123c1a8a97125eb48414022618cec0e64eb1313727bfe/zope.interface-4.1.3.tar.gz";
+ md5 = "9ae3d24c0c7415deb249dd1a132f0f79";
+ };
+ };
+
+### Test requirements
+
+
+}
diff --git a/production.ini b/production.ini
new file mode 100644
--- /dev/null
+++ b/production.ini
@@ -0,0 +1,93 @@
+################################################################################
+# RhodeCode VCSServer - configuration #
+# #
+################################################################################
+
+[DEFAULT]
+host = 127.0.0.1
+port = 9900
+locale = en_US.UTF-8
+# number of worker threads; set this using the formula threadpool_size = N * 6,
+# where N is the number of RhodeCode Enterprise workers, e.g. running 2 instances
+# with 8 gunicorn workers each gives 2 * 8 * 6 = 96, so threadpool_size = 96
+threadpool_size = 96
+timeout = 0
+
+# cache regions, please don't change
+beaker.cache.regions = repo_object
+beaker.cache.repo_object.type = memorylru
+beaker.cache.repo_object.max_items = 100
+# cache auto-expires after N seconds
+beaker.cache.repo_object.expire = 300
+beaker.cache.repo_object.enabled = true
+
+
+################################
+### LOGGING CONFIGURATION ####
+################################
+[loggers]
+keys = root, vcsserver, pyro4, beaker
+
+[handlers]
+keys = console
+
+[formatters]
+keys = generic
+
+#############
+## LOGGERS ##
+#############
+[logger_root]
+level = NOTSET
+handlers = console
+
+[logger_vcsserver]
+level = DEBUG
+handlers =
+qualname = vcsserver
+propagate = 1
+
+[logger_beaker]
+level = DEBUG
+handlers =
+qualname = beaker
+propagate = 1
+
+[logger_pyro4]
+level = DEBUG
+handlers =
+qualname = Pyro4
+propagate = 1
+
+
+##############
+## HANDLERS ##
+##############
+
+[handler_console]
+class = StreamHandler
+args = (sys.stderr,)
+level = DEBUG
+formatter = generic
+
+[handler_file]
+class = FileHandler
+args = ('vcsserver.log', 'a',)
+level = DEBUG
+formatter = generic
+
+[handler_file_rotating]
+class = logging.handlers.TimedRotatingFileHandler
+# 'D', 5 - rotate every 5 days;
+# other intervals such as 'h' or 'midnight' can also be used
+args = ('vcsserver.log', 'D', 5, 10,)
+level = DEBUG
+formatter = generic
+
+################
+## FORMATTERS ##
+################
+
+[formatter_generic]
+format = %(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s
+datefmt = %Y-%m-%d %H:%M:%S
diff --git a/release.nix b/release.nix
new file mode 100644
--- /dev/null
+++ b/release.nix
@@ -0,0 +1,13 @@
+{ pkgs ? import <nixpkgs> {}
+}:
+
+let
+
+ vcsserver = import ./default.nix {
+ inherit
+ pkgs;
+ };
+
+in {
+ build = vcsserver;
+}
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,34 @@
+Beaker==1.7.0
+configobj==5.0.6
+dulwich==0.12.0
+hgsubversion==1.8.5
+infrae.cache==1.0.1
+mercurial==3.7.3
+msgpack-python==0.4.6
+py==1.4.29
+pyramid==1.6.1
+pyramid-jinja2==2.5
+pyramid-mako==1.0.2
+Pyro4==4.41
+pytest==2.8.5
+repoze.lru==0.6
+serpent==1.12
+setuptools==20.8.1
+simplejson==3.7.2
+subprocess32==3.2.6
+# TODO: johbo: This version is not in source on PyPI currently,
+# change back once this or a future version is available
+https://github.com/jelmer/subvertpy/archive/subvertpy-0.9.3.tar.gz#md5=7b745a47128050ea5a73efcd913ec1cf
+six==1.9.0
+translationstring==1.3
+waitress==0.8.9
+WebOb==1.3.1
+wheel==0.29.0
+zope.deprecation==4.1.1
+zope.interface==4.1.3
+greenlet==0.4.7
+gunicorn==19.3.0
+
+# Test related requirements
+mock==1.0.1
+WebTest==1.4.3
diff --git a/setup.py b/setup.py
new file mode 100644
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,102 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from setuptools import setup, find_packages
+from setuptools.command.test import test as TestCommand
+from codecs import open
+from os import path
+import pkgutil
+import sys
+
+
+here = path.abspath(path.dirname(__file__))
+
+with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
+ long_description = f.read()
+
+
+def get_version():
+ version = pkgutil.get_data('vcsserver', 'VERSION')
+ return version.strip()
+
+
+class PyTest(TestCommand):
+ user_options = [('pytest-args=', 'a', "Arguments to pass to py.test")]
+
+ def initialize_options(self):
+ TestCommand.initialize_options(self)
+ self.pytest_args = []
+
+ def finalize_options(self):
+ TestCommand.finalize_options(self)
+ self.test_args = []
+ self.test_suite = True
+
+ def run_tests(self):
+ # import here, cause outside the eggs aren't loaded
+ import pytest
+ errno = pytest.main(self.pytest_args)
+ sys.exit(errno)
+
+
+setup(
+ name='rhodecode-vcsserver',
+ version=get_version(),
+ description='Version Control System Server',
+ long_description=long_description,
+ url='http://www.rhodecode.com',
+ author='RhodeCode GmbH',
+ author_email='marcin@rhodecode.com',
+ cmdclass={'test': PyTest},
+ license='GPLv3',
+ classifiers=[
+ 'Development Status :: 5 - Production/Stable',
+ 'Intended Audience :: Developers',
+ 'Topic :: Software Development :: Version Control',
+ 'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',
+ 'Programming Language :: Python :: 2.7',
+ ],
+ packages=find_packages(),
+ tests_require=[
+ 'mock',
+ 'pytest',
+ 'WebTest',
+ ],
+ install_requires=[
+ 'configobj',
+ 'dulwich',
+ 'hgsubversion',
+ 'infrae.cache',
+ 'mercurial',
+ 'msgpack-python',
+ 'pyramid',
+ 'Pyro4',
+ 'simplejson',
+ 'subprocess32',
+ 'waitress',
+ 'WebOb',
+ ],
+ package_data={
+ 'vcsserver': ['VERSION'],
+ },
+ entry_points={
+ 'console_scripts': [
+ 'vcsserver=vcsserver.main:main',
+ ],
+ 'paste.app_factory': ['main=vcsserver.http_main:main']
+ },
+)
diff --git a/shell.nix b/shell.nix
new file mode 100644
--- /dev/null
+++ b/shell.nix
@@ -0,0 +1,13 @@
+{ pkgs ? (import <nixpkgs> {})
+}:
+
+let
+ vcsserver = import ./default.nix {inherit pkgs;};
+
+in vcsserver.override (attrs: {
+
+  # Avoid dumping any sources into the store when entering the shell and
+  # make development a little more convenient.
+ src = null;
+
+})
diff --git a/test.ini b/test.ini
new file mode 100644
--- /dev/null
+++ b/test.ini
@@ -0,0 +1,93 @@
+################################################################################
+# RhodeCode VCSServer - configuration #
+# #
+################################################################################
+
+[DEFAULT]
+host = 127.0.0.1
+port = 9901
+locale = en_US.UTF-8
+# number of worker threads; set it using the formula threadpool_size = N * 6,
+# where N is the number of RhodeCode Enterprise workers, e.g. running 2 instances
+# with 8 gunicorn workers each gives 2 * 8 * 6 = 96, so threadpool_size = 96
+threadpool_size = 96
+timeout = 0
+
+# cache regions, please don't change
+beaker.cache.regions = repo_object
+beaker.cache.repo_object.type = memorylru
+beaker.cache.repo_object.max_items = 100
+# cache auto-expires after N seconds
+beaker.cache.repo_object.expire = 300
+beaker.cache.repo_object.enabled = true
+
+
+################################
+### LOGGING CONFIGURATION ####
+################################
+[loggers]
+keys = root, vcsserver, pyro4, beaker
+
+[handlers]
+keys = console
+
+[formatters]
+keys = generic
+
+#############
+## LOGGERS ##
+#############
+[logger_root]
+level = NOTSET
+handlers = console
+
+[logger_vcsserver]
+level = DEBUG
+handlers =
+qualname = vcsserver
+propagate = 1
+
+[logger_beaker]
+level = DEBUG
+handlers =
+qualname = beaker
+propagate = 1
+
+[logger_pyro4]
+level = DEBUG
+handlers =
+qualname = Pyro4
+propagate = 1
+
+
+##############
+## HANDLERS ##
+##############
+
+[handler_console]
+class = StreamHandler
+args = (sys.stderr,)
+level = INFO
+formatter = generic
+
+[handler_file]
+class = FileHandler
+args = ('vcsserver.log', 'a',)
+level = DEBUG
+formatter = generic
+
+[handler_file_rotating]
+class = logging.handlers.TimedRotatingFileHandler
+# 'D', 5 - rotate every 5 days
+# you can also use 'h' or 'midnight'
+args = ('vcsserver.log', 'D', 5, 10,)
+level = DEBUG
+formatter = generic
+
+################
+## FORMATTERS ##
+################
+
+[formatter_generic]
+format = %(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s
+datefmt = %Y-%m-%d %H:%M:%S
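The `[handler_file_rotating]` args above map positionally onto `logging.handlers.TimedRotatingFileHandler(filename, when, interval, backupCount)`. A small stand-alone sketch of the same handler (the log file is written to a temp dir here rather than the working directory):

```python
import logging
import logging.handlers
import os
import tempfile

# Same positional args as [handler_file_rotating] in test.ini:
# ('vcsserver.log', 'D', 5, 10) -> filename, when, interval, backupCount,
# i.e. rotate every 5 days and keep at most 10 old log files.
log_path = os.path.join(tempfile.mkdtemp(), 'vcsserver.log')
handler = logging.handlers.TimedRotatingFileHandler(
    log_path, when='D', interval=5, backupCount=10)
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter(
    '%(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'))

logger = logging.getLogger('vcsserver.demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.debug('rotating handler configured')
handler.close()
```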
diff --git a/tests/conftest.py b/tests/conftest.py
new file mode 100644
--- /dev/null
+++ b/tests/conftest.py
@@ -0,0 +1,57 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import socket
+
+import pytest
+
+
+def pytest_addoption(parser):
+ parser.addoption(
+ '--repeat', type=int, default=100,
+ help="Number of repetitions in performance tests.")
+
+
+@pytest.fixture(scope='session')
+def repeat(request):
+ """
+    This fixture provides the number of repetitions for performance tests.
+
+    Slower calls may divide it by 10 or 100. The default is chosen so that the
+    tests are not too slow in our default test suite.
+ """
+ return request.config.getoption('--repeat')
+
+
+@pytest.fixture(scope='session')
+def vcsserver_port(request):
+ port = get_available_port()
+ print 'Using vcsserver port %s' % (port, )
+ return port
+
+
+def get_available_port():
+ family = socket.AF_INET
+ socktype = socket.SOCK_STREAM
+ host = '127.0.0.1'
+
+ mysocket = socket.socket(family, socktype)
+ mysocket.bind((host, 0))
+ port = mysocket.getsockname()[1]
+ mysocket.close()
+ del mysocket
+ return port
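The `get_available_port` helper above relies on a standard trick: binding to port 0 makes the OS assign a free ephemeral port, and `getsockname()` reveals which one was chosen. A self-contained sketch of the same idea:

```python
import socket

def get_free_port(host='127.0.0.1'):
    # Bind to port 0: the OS picks a free ephemeral port, and
    # getsockname() tells us which one it chose.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))
    port = sock.getsockname()[1]
    sock.close()
    return port

port = get_free_port()
```

Note the small race window: another process may grab the port between `close()` and the moment the server actually binds it, which is acceptable for test fixtures like the one above.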
diff --git a/tests/fixture.py b/tests/fixture.py
new file mode 100644
--- /dev/null
+++ b/tests/fixture.py
@@ -0,0 +1,71 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import os
+import shutil
+import tempfile
+
+import configobj
+
+
+class TestINI(object):
+    """
+    Creates a new test.ini file as a copy of an existing one, with edited
+    data. If the source file is not present, a new one is created.
+    Example usage::
+
+        with TestINI('test.ini', [{'section': {'key': 'val'}}]) as new_test_ini_path:
+            print 'vcsserver --config=%s' % new_test_ini_path
+    """
+
+ def __init__(self, ini_file_path, ini_params, new_file_prefix=None,
+ destroy=True):
+ self.ini_file_path = ini_file_path
+ self.ini_params = ini_params
+ self.new_path = None
+ self.new_path_prefix = new_file_prefix or 'test'
+ self.destroy = destroy
+
+ def __enter__(self):
+ _, pref = tempfile.mkstemp()
+ loc = tempfile.gettempdir()
+ self.new_path = os.path.join(loc, '{}_{}_{}'.format(
+ pref, self.new_path_prefix, self.ini_file_path))
+
+ # copy ini file and modify according to the params, if we re-use a file
+ if os.path.isfile(self.ini_file_path):
+ shutil.copy(self.ini_file_path, self.new_path)
+ else:
+            # create a new empty file for ConfigObj to write to.
+ with open(self.new_path, 'wb'):
+ pass
+
+ config = configobj.ConfigObj(
+ self.new_path, file_error=True, write_empty_values=True)
+
+ for data in self.ini_params:
+ section, ini_params = data.items()[0]
+ key, val = ini_params.items()[0]
+ if section not in config:
+ config[section] = {}
+ config[section][key] = val
+
+ config.write()
+ return self.new_path
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ if self.destroy:
+ os.remove(self.new_path)
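`TestINI` above writes section overrides into a fresh copy of an ini file via `configobj`. The same copy-and-override pattern can be sketched with only the stdlib's `configparser` (this is an illustration of the idea, not vcsserver's actual helper):

```python
import configparser
import os
import tempfile

def write_ini_with_overrides(ini_params):
    # Mirror TestINI's idea: take a list of {'section': {'key': val}}
    # dicts and write them out as an ini file in a temp dir.
    path = os.path.join(tempfile.mkdtemp(), 'test.ini')
    config = configparser.ConfigParser()
    for data in ini_params:
        (section, params), = data.items()
        if not config.has_section(section):
            config.add_section(section)
        for key, val in params.items():
            config.set(section, key, str(val))
    with open(path, 'w') as f:
        config.write(f)
    return path

ini_path = write_ini_with_overrides([{'server': {'port': 9901}}])
```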
diff --git a/tests/test_git.py b/tests/test_git.py
new file mode 100644
--- /dev/null
+++ b/tests/test_git.py
@@ -0,0 +1,162 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import inspect
+
+import pytest
+import dulwich.errors
+from mock import Mock, patch
+
+from vcsserver import git
+
+
+SAMPLE_REFS = {
+ 'HEAD': 'fd627b9e0dd80b47be81af07c4a98518244ed2f7',
+ 'refs/tags/v0.1.9': '341d28f0eec5ddf0b6b77871e13c2bbd6bec685c',
+ 'refs/tags/v0.1.8': '74ebce002c088b8a5ecf40073db09375515ecd68',
+ 'refs/tags/v0.1.1': 'e6ea6d16e2f26250124a1f4b4fe37a912f9d86a0',
+ 'refs/tags/v0.1.3': '5a3a8fb005554692b16e21dee62bf02667d8dc3e',
+}
+
+
+@pytest.fixture
+def git_remote():
+ """
+ A GitRemote instance with a mock factory.
+ """
+ factory = Mock()
+ remote = git.GitRemote(factory)
+ return remote
+
+
+def test_discover_git_version(git_remote):
+ version = git_remote.discover_git_version()
+ assert version
+
+
+class TestGitFetch(object):
+ def setup(self):
+ self.mock_repo = Mock()
+ factory = Mock()
+ factory.repo = Mock(return_value=self.mock_repo)
+ self.remote_git = git.GitRemote(factory)
+
+ def test_fetches_all_when_no_commit_ids_specified(self):
+ def side_effect(determine_wants, *args, **kwargs):
+ determine_wants(SAMPLE_REFS)
+
+ with patch('dulwich.client.LocalGitClient.fetch') as mock_fetch:
+ mock_fetch.side_effect = side_effect
+ self.remote_git.fetch(wire=None, url='/tmp/', apply_refs=False)
+ determine_wants = self.mock_repo.object_store.determine_wants_all
+ determine_wants.assert_called_once_with(SAMPLE_REFS)
+
+ def test_fetches_specified_commits(self):
+ selected_refs = {
+ 'refs/tags/v0.1.8': '74ebce002c088b8a5ecf40073db09375515ecd68',
+ 'refs/tags/v0.1.3': '5a3a8fb005554692b16e21dee62bf02667d8dc3e',
+ }
+
+ def side_effect(determine_wants, *args, **kwargs):
+ result = determine_wants(SAMPLE_REFS)
+ assert sorted(result) == sorted(selected_refs.values())
+ return result
+
+ with patch('dulwich.client.LocalGitClient.fetch') as mock_fetch:
+ mock_fetch.side_effect = side_effect
+ self.remote_git.fetch(
+ wire=None, url='/tmp/', apply_refs=False,
+ refs=selected_refs.keys())
+ determine_wants = self.mock_repo.object_store.determine_wants_all
+ assert determine_wants.call_count == 0
+
+ def test_get_remote_refs(self):
+ factory = Mock()
+ remote_git = git.GitRemote(factory)
+ url = 'http://example.com/test/test.git'
+ sample_refs = {
+ 'refs/tags/v0.1.8': '74ebce002c088b8a5ecf40073db09375515ecd68',
+ 'refs/tags/v0.1.3': '5a3a8fb005554692b16e21dee62bf02667d8dc3e',
+ }
+
+ with patch('vcsserver.git.Repo', create=False) as mock_repo:
+ mock_repo().get_refs.return_value = sample_refs
+ remote_refs = remote_git.get_remote_refs(wire=None, url=url)
+ mock_repo().get_refs.assert_called_once_with()
+ assert remote_refs == sample_refs
+
+ def test_remove_ref(self):
+ ref_to_remove = 'refs/tags/v0.1.9'
+ self.mock_repo.refs = SAMPLE_REFS.copy()
+ self.remote_git.remove_ref(None, ref_to_remove)
+ assert ref_to_remove not in self.mock_repo.refs
+
+
+class TestReraiseSafeExceptions(object):
+ def test_method_decorated_with_reraise_safe_exceptions(self):
+ factory = Mock()
+ git_remote = git.GitRemote(factory)
+
+ def fake_function():
+ return None
+
+ decorator = git.reraise_safe_exceptions(fake_function)
+
+ methods = inspect.getmembers(git_remote, predicate=inspect.ismethod)
+ for method_name, method in methods:
+ if not method_name.startswith('_'):
+ assert method.im_func.__code__ == decorator.__code__
+
+ @pytest.mark.parametrize('side_effect, expected_type', [
+ (dulwich.errors.ChecksumMismatch('0000000', 'deadbeef'), 'lookup'),
+ (dulwich.errors.NotCommitError('deadbeef'), 'lookup'),
+ (dulwich.errors.MissingCommitError('deadbeef'), 'lookup'),
+ (dulwich.errors.ObjectMissing('deadbeef'), 'lookup'),
+ (dulwich.errors.HangupException(), 'error'),
+ (dulwich.errors.UnexpectedCommandError('test-cmd'), 'error'),
+ ])
+ def test_safe_exceptions_reraised(self, side_effect, expected_type):
+ @git.reraise_safe_exceptions
+ def fake_method():
+ raise side_effect
+
+ with pytest.raises(Exception) as exc_info:
+ fake_method()
+ assert type(exc_info.value) == Exception
+ assert exc_info.value._vcs_kind == expected_type
+
+
+class TestDulwichRepoWrapper(object):
+ def test_calls_close_on_delete(self):
+ isdir_patcher = patch('dulwich.repo.os.path.isdir', return_value=True)
+ with isdir_patcher:
+ repo = git.Repo('/tmp/abcde')
+ with patch.object(git.DulwichRepo, 'close') as close_mock:
+ del repo
+ close_mock.assert_called_once_with()
+
+
+class TestGitFactory(object):
+ def test_create_repo_returns_dulwich_wrapper(self):
+ factory = git.GitFactory(repo_cache=Mock())
+ wire = {
+ 'path': '/tmp/abcde'
+ }
+ isdir_patcher = patch('dulwich.repo.os.path.isdir', return_value=True)
+ with isdir_patcher:
+ result = factory._create_repo(wire, True)
+ assert isinstance(result, git.Repo)
diff --git a/tests/test_hg.py b/tests/test_hg.py
new file mode 100644
--- /dev/null
+++ b/tests/test_hg.py
@@ -0,0 +1,127 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import inspect
+import sys
+import traceback
+
+import pytest
+from mercurial.error import LookupError
+from mock import Mock, MagicMock, patch
+
+from vcsserver import exceptions, hg, hgcompat
+
+
+class TestHGLookup(object):
+ def setup(self):
+ self.mock_repo = MagicMock()
+ self.mock_repo.__getitem__.side_effect = LookupError(
+ 'revision_or_commit_id', 'index', 'message')
+ factory = Mock()
+ factory.repo = Mock(return_value=self.mock_repo)
+ self.remote_hg = hg.HgRemote(factory)
+
+ def test_fail_lookup_hg(self):
+ with pytest.raises(Exception) as exc_info:
+ self.remote_hg.lookup(
+ wire=None, revision='revision_or_commit_id', both=True)
+
+ assert exc_info.value._vcs_kind == 'lookup'
+ assert 'revision_or_commit_id' in exc_info.value.args
+
+
+class TestDiff(object):
+ def test_raising_safe_exception_when_lookup_failed(self):
+ repo = Mock()
+ factory = Mock()
+ factory.repo = Mock(return_value=repo)
+ hg_remote = hg.HgRemote(factory)
+ with patch('mercurial.patch.diff') as diff_mock:
+ diff_mock.side_effect = LookupError(
+ 'deadbeef', 'index', 'message')
+ with pytest.raises(Exception) as exc_info:
+ hg_remote.diff(
+ wire=None, rev1='deadbeef', rev2='deadbee1',
+ file_filter=None, opt_git=True, opt_ignorews=True,
+ context=3)
+ assert type(exc_info.value) == Exception
+ assert exc_info.value._vcs_kind == 'lookup'
+
+
+class TestReraiseSafeExceptions(object):
+ def test_method_decorated_with_reraise_safe_exceptions(self):
+ factory = Mock()
+ hg_remote = hg.HgRemote(factory)
+ methods = inspect.getmembers(hg_remote, predicate=inspect.ismethod)
+ decorator = hg.reraise_safe_exceptions(None)
+ for method_name, method in methods:
+ if not method_name.startswith('_'):
+ assert method.im_func.__code__ == decorator.__code__
+
+ @pytest.mark.parametrize('side_effect, expected_type', [
+ (hgcompat.Abort(), 'abort'),
+ (hgcompat.InterventionRequired(), 'abort'),
+ (hgcompat.RepoLookupError(), 'lookup'),
+ (hgcompat.LookupError('deadbeef', 'index', 'message'), 'lookup'),
+ (hgcompat.RepoError(), 'error'),
+ (hgcompat.RequirementError(), 'requirement'),
+ ])
+ def test_safe_exceptions_reraised(self, side_effect, expected_type):
+ @hg.reraise_safe_exceptions
+ def fake_method():
+ raise side_effect
+
+ with pytest.raises(Exception) as exc_info:
+ fake_method()
+ assert type(exc_info.value) == Exception
+ assert exc_info.value._vcs_kind == expected_type
+
+ def test_keeps_original_traceback(self):
+ @hg.reraise_safe_exceptions
+ def fake_method():
+ try:
+ raise hgcompat.Abort()
+ except:
+ self.original_traceback = traceback.format_tb(
+ sys.exc_info()[2])
+ raise
+
+ try:
+ fake_method()
+ except Exception:
+ new_traceback = traceback.format_tb(sys.exc_info()[2])
+
+ new_traceback_tail = new_traceback[-len(self.original_traceback):]
+ assert new_traceback_tail == self.original_traceback
+
+    def test_maps_unknown_exceptions_to_unhandled(self):
+ @hg.reraise_safe_exceptions
+ def stub_method():
+ raise ValueError('stub')
+
+ with pytest.raises(Exception) as exc_info:
+ stub_method()
+ assert exc_info.value._vcs_kind == 'unhandled'
+
+ def test_does_not_map_known_exceptions(self):
+ @hg.reraise_safe_exceptions
+ def stub_method():
+ raise exceptions.LookupException('stub')
+
+ with pytest.raises(Exception) as exc_info:
+ stub_method()
+ assert exc_info.value._vcs_kind == 'lookup'
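The tests above exercise a decorator that converts backend-specific errors into plain `Exception` objects tagged with a `_vcs_kind` attribute, mapping anything unrecognized to `'unhandled'`. A minimal stand-alone sketch of that pattern (the exception class and mapping here are illustrative, not vcsserver's actual implementation):

```python
import functools

class StubLookupError(Exception):
    """Stands in for a backend-specific lookup failure."""

def reraise_safe_exceptions(func):
    # Known backend exceptions become plain Exceptions carrying a
    # `_vcs_kind` marker; anything else is tagged 'unhandled'.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except StubLookupError as e:
            exc = Exception(*e.args)
            exc._vcs_kind = 'lookup'
            raise exc
        except Exception as e:
            exc = Exception(*e.args)
            exc._vcs_kind = 'unhandled'
            raise exc
    return wrapper

@reraise_safe_exceptions
def fake_method():
    raise StubLookupError('deadbeef')
```

This lets callers across a network boundary match on a single exception type plus `_vcs_kind` instead of importing every backend's exception hierarchy.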
diff --git a/tests/test_hgpatches.py b/tests/test_hgpatches.py
new file mode 100644
--- /dev/null
+++ b/tests/test_hgpatches.py
@@ -0,0 +1,125 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import mock
+import pytest
+
+from vcsserver import hgcompat, hgpatches
+
+
+LARGEFILES_CAPABILITY = 'largefiles=serve'
+
+
+def test_patch_largefiles_capabilities_applies_patch(
+ patched_capabilities):
+ lfproto = hgcompat.largefiles.proto
+ hgpatches.patch_largefiles_capabilities()
+ assert lfproto.capabilities.func_name == '_dynamic_capabilities'
+
+
+def test_dynamic_capabilities_uses_original_function_if_not_enabled(
+ stub_repo, stub_proto, stub_ui, stub_extensions, patched_capabilities):
+ dynamic_capabilities = hgpatches._dynamic_capabilities_wrapper(
+ hgcompat.largefiles.proto, stub_extensions)
+
+ caps = dynamic_capabilities(stub_repo, stub_proto)
+
+ stub_extensions.assert_called_once_with(stub_ui)
+ assert LARGEFILES_CAPABILITY not in caps
+
+
+def test_dynamic_capabilities_uses_updated_capabilitiesorig(
+ stub_repo, stub_proto, stub_ui, stub_extensions, patched_capabilities):
+ dynamic_capabilities = hgpatches._dynamic_capabilities_wrapper(
+ hgcompat.largefiles.proto, stub_extensions)
+
+    # This happens when the extension is loaded for the first time; it is
+    # important that the updated function is correctly picked up.
+ hgcompat.largefiles.proto.capabilitiesorig = mock.Mock(
+ return_value='REPLACED')
+
+ caps = dynamic_capabilities(stub_repo, stub_proto)
+ assert 'REPLACED' == caps
+
+
+def test_dynamic_capabilities_ignores_updated_capabilities(
+ stub_repo, stub_proto, stub_ui, stub_extensions, patched_capabilities):
+ stub_extensions.return_value = [('largefiles', mock.Mock())]
+ dynamic_capabilities = hgpatches._dynamic_capabilities_wrapper(
+ hgcompat.largefiles.proto, stub_extensions)
+
+    # Simulates a reassignment after the extension has been patched; the
+    # dynamic wrapper must not call this updated function.
+ hgcompat.largefiles.proto.capabilities = mock.Mock(
+ side_effect=Exception('Must not be called'))
+
+ dynamic_capabilities(stub_repo, stub_proto)
+
+
+def test_dynamic_capabilities_uses_largefiles_if_enabled(
+ stub_repo, stub_proto, stub_ui, stub_extensions, patched_capabilities):
+ stub_extensions.return_value = [('largefiles', mock.Mock())]
+
+ dynamic_capabilities = hgpatches._dynamic_capabilities_wrapper(
+ hgcompat.largefiles.proto, stub_extensions)
+
+ caps = dynamic_capabilities(stub_repo, stub_proto)
+
+ stub_extensions.assert_called_once_with(stub_ui)
+ assert LARGEFILES_CAPABILITY in caps
+
+
+@pytest.fixture
+def patched_capabilities(request):
+ """
+ Patch in `capabilitiesorig` and restore both capability functions.
+ """
+ lfproto = hgcompat.largefiles.proto
+ orig_capabilities = lfproto.capabilities
+ orig_capabilitiesorig = lfproto.capabilitiesorig
+
+ lfproto.capabilitiesorig = mock.Mock(return_value='ORIG')
+
+ @request.addfinalizer
+ def restore():
+ lfproto.capabilities = orig_capabilities
+ lfproto.capabilitiesorig = orig_capabilitiesorig
+
+
+@pytest.fixture
+def stub_repo(stub_ui):
+ repo = mock.Mock()
+ repo.ui = stub_ui
+ return repo
+
+
+@pytest.fixture
+def stub_proto(stub_ui):
+ proto = mock.Mock()
+ proto.ui = stub_ui
+ return proto
+
+
+@pytest.fixture
+def stub_ui():
+ return hgcompat.ui.ui()
+
+
+@pytest.fixture
+def stub_extensions():
+ extensions = mock.Mock(return_value=tuple())
+ return extensions
diff --git a/tests/test_hooks.py b/tests/test_hooks.py
new file mode 100644
--- /dev/null
+++ b/tests/test_hooks.py
@@ -0,0 +1,549 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import contextlib
+import io
+import threading
+from BaseHTTPServer import BaseHTTPRequestHandler
+from SocketServer import TCPServer
+
+import mercurial.ui
+import mock
+import pytest
+import simplejson as json
+
+from vcsserver import hooks
+
+
+class HooksStub(object):
+ """
+    Simulates a Pyro4.Proxy object.
+
+ Will always return `result`, no matter which hook has been called on it.
+ """
+
+ def __init__(self, result):
+ self._result = result
+
+ def __call__(self, hooks_uri):
+ return self
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ pass
+
+ def __getattr__(self, name):
+ return mock.Mock(return_value=self._result)
+
+
+@contextlib.contextmanager
+def mock_hook_response(
+ status=0, output='', exception=None, exception_args=None):
+ response = {
+ 'status': status,
+ 'output': output,
+ }
+ if exception:
+ response.update({
+ 'exception': exception,
+ 'exception_args': exception_args,
+ })
+
+ with mock.patch('Pyro4.Proxy', HooksStub(response)):
+ yield
+
+
+def get_hg_ui(extras=None):
+    """Create a mercurial ui object with a valid RC_SCM_DATA entry."""
+ extras = extras or {}
+ required_extras = {
+ 'username': '',
+ 'repository': '',
+ 'locked_by': '',
+ 'scm': '',
+ 'make_lock': '',
+ 'action': '',
+ 'ip': '',
+ 'hooks_uri': 'fake_hooks_uri',
+ }
+ required_extras.update(extras)
+ hg_ui = mercurial.ui.ui()
+ hg_ui.setconfig('rhodecode', 'RC_SCM_DATA', json.dumps(required_extras))
+
+ return hg_ui
+
+
+def test_call_hook_no_error(capsys):
+ extras = {
+ 'hooks_uri': 'fake_hook_uri',
+ }
+    expected_output = 'My mock output'
+ writer = mock.Mock()
+
+ with mock_hook_response(status=1, output=expected_output):
+ hooks._call_hook('hook_name', extras, writer)
+
+ out, err = capsys.readouterr()
+
+ writer.write.assert_called_with(expected_output)
+ assert err == ''
+
+
+def test_call_hook_with_exception(capsys):
+ extras = {
+ 'hooks_uri': 'fake_hook_uri',
+ }
+    expected_output = 'My mock output'
+ writer = mock.Mock()
+
+ with mock_hook_response(status=1, output=expected_output,
+ exception='TypeError',
+ exception_args=('Mock exception', )):
+ with pytest.raises(Exception) as excinfo:
+ hooks._call_hook('hook_name', extras, writer)
+
+ assert excinfo.type == Exception
+ assert 'Mock exception' in str(excinfo.value)
+
+ out, err = capsys.readouterr()
+
+ writer.write.assert_called_with(expected_output)
+ assert err == ''
+
+
+def test_call_hook_with_locked_exception(capsys):
+ extras = {
+ 'hooks_uri': 'fake_hook_uri',
+ }
+    expected_output = 'My mock output'
+ writer = mock.Mock()
+
+ with mock_hook_response(status=1, output=expected_output,
+ exception='HTTPLockedRC',
+ exception_args=('message',)):
+ with pytest.raises(Exception) as excinfo:
+ hooks._call_hook('hook_name', extras, writer)
+
+ assert excinfo.value._vcs_kind == 'repo_locked'
+ assert 'message' == str(excinfo.value)
+
+ out, err = capsys.readouterr()
+
+ writer.write.assert_called_with(expected_output)
+ assert err == ''
+
+
+def test_call_hook_with_stdout():
+ extras = {
+ 'hooks_uri': 'fake_hook_uri',
+ }
+    expected_output = 'My mock output'
+
+ stdout = io.BytesIO()
+ with mock_hook_response(status=1, output=expected_output):
+ hooks._call_hook('hook_name', extras, stdout)
+
+ assert stdout.getvalue() == expected_output
+
+
+def test_repo_size():
+ hg_ui = get_hg_ui()
+
+ with mock_hook_response(status=1):
+ assert hooks.repo_size(hg_ui, None) == 1
+
+
+def test_pre_pull():
+ hg_ui = get_hg_ui()
+
+ with mock_hook_response(status=1):
+ assert hooks.pre_pull(hg_ui, None) == 1
+
+
+def test_post_pull():
+ hg_ui = get_hg_ui()
+
+ with mock_hook_response(status=1):
+ assert hooks.post_pull(hg_ui, None) == 1
+
+
+def test_pre_push():
+ hg_ui = get_hg_ui()
+
+ with mock_hook_response(status=1):
+ assert hooks.pre_push(hg_ui, None) == 1
+
+
+def test_post_push():
+ hg_ui = get_hg_ui()
+
+ with mock_hook_response(status=1):
+ with mock.patch('vcsserver.hooks._rev_range_hash', return_value=[]):
+ assert hooks.post_push(hg_ui, None, None) == 1
+
+
+def test_git_pre_receive():
+ extras = {
+ 'hooks': ['push'],
+ 'hooks_uri': 'fake_hook_uri',
+ }
+ with mock_hook_response(status=1):
+ response = hooks.git_pre_receive(None, None,
+ {'RC_SCM_DATA': json.dumps(extras)})
+ assert response == 1
+
+
+def test_git_pre_receive_is_disabled():
+ extras = {'hooks': ['pull']}
+ response = hooks.git_pre_receive(None, None,
+ {'RC_SCM_DATA': json.dumps(extras)})
+
+ assert response == 0
+
+
+def test_git_post_receive_no_subprocess_call():
+ extras = {
+ 'hooks': ['push'],
+ 'hooks_uri': 'fake_hook_uri',
+ }
+    # Setting revision_lines to '' avoids all subprocess calls
+ with mock_hook_response(status=1):
+ response = hooks.git_post_receive(None, '',
+ {'RC_SCM_DATA': json.dumps(extras)})
+ assert response == 1
+
+
+def test_git_post_receive_is_disabled():
+ extras = {'hooks': ['pull']}
+ response = hooks.git_post_receive(None, '',
+ {'RC_SCM_DATA': json.dumps(extras)})
+
+ assert response == 0
+
+
+def test_git_post_receive_calls_repo_size():
+ extras = {'hooks': ['push', 'repo_size']}
+ with mock.patch.object(hooks, '_call_hook') as call_hook_mock:
+ hooks.git_post_receive(
+ None, '', {'RC_SCM_DATA': json.dumps(extras)})
+ extras.update({'commit_ids': []})
+ expected_calls = [
+ mock.call('repo_size', extras, mock.ANY),
+ mock.call('post_push', extras, mock.ANY),
+ ]
+ assert call_hook_mock.call_args_list == expected_calls
+
+
+def test_git_post_receive_does_not_call_disabled_repo_size():
+ extras = {'hooks': ['push']}
+ with mock.patch.object(hooks, '_call_hook') as call_hook_mock:
+ hooks.git_post_receive(
+ None, '', {'RC_SCM_DATA': json.dumps(extras)})
+ extras.update({'commit_ids': []})
+ expected_calls = [
+ mock.call('post_push', extras, mock.ANY)
+ ]
+ assert call_hook_mock.call_args_list == expected_calls
+
+
+def test_repo_size_exception_does_not_affect_git_post_receive():
+ extras = {'hooks': ['push', 'repo_size']}
+ status = 0
+
+ def side_effect(name, *args, **kwargs):
+ if name == 'repo_size':
+ raise Exception('Fake exception')
+ else:
+ return status
+
+ with mock.patch.object(hooks, '_call_hook') as call_hook_mock:
+ call_hook_mock.side_effect = side_effect
+ result = hooks.git_post_receive(
+ None, '', {'RC_SCM_DATA': json.dumps(extras)})
+ assert result == status
+
+
+@mock.patch('vcsserver.hooks._run_command')
+def test_git_post_receive_first_commit_sub_branch(cmd_mock):
+ def cmd_mock_returns(args):
+ if args == ['git', 'show', 'HEAD']:
+ raise
+ if args == ['git', 'for-each-ref', '--format=%(refname)',
+ 'refs/heads/*']:
+ return 'refs/heads/test-branch2/sub-branch'
+ if args == ['git', 'log', '--reverse', '--pretty=format:%H', '--',
+ '9695eef57205c17566a3ae543be187759b310bb7', '--not',
+ 'refs/heads/test-branch2/sub-branch']:
+ return ''
+
+ cmd_mock.side_effect = cmd_mock_returns
+
+ extras = {
+ 'hooks': ['push'],
+ 'hooks_uri': 'fake_hook_uri'
+ }
+ rev_lines = ['0000000000000000000000000000000000000000 '
+ '9695eef57205c17566a3ae543be187759b310bb7 '
+ 'refs/heads/feature/sub-branch\n']
+ with mock_hook_response(status=0):
+ response = hooks.git_post_receive(None, rev_lines,
+ {'RC_SCM_DATA': json.dumps(extras)})
+
+ calls = [
+ mock.call(['git', 'show', 'HEAD']),
+ mock.call(['git', 'symbolic-ref', 'HEAD',
+ 'refs/heads/feature/sub-branch']),
+ ]
+ cmd_mock.assert_has_calls(calls, any_order=True)
+ assert response == 0
+
+
+@mock.patch('vcsserver.hooks._run_command')
+def test_git_post_receive_first_commit_revs(cmd_mock):
+ extras = {
+ 'hooks': ['push'],
+ 'hooks_uri': 'fake_hook_uri'
+ }
+ rev_lines = [
+ '0000000000000000000000000000000000000000 '
+ '9695eef57205c17566a3ae543be187759b310bb7 refs/heads/master\n']
+ with mock_hook_response(status=0):
+ response = hooks.git_post_receive(
+ None, rev_lines, {'RC_SCM_DATA': json.dumps(extras)})
+
+ calls = [
+ mock.call(['git', 'show', 'HEAD']),
+ mock.call(['git', 'for-each-ref', '--format=%(refname)',
+ 'refs/heads/*']),
+ mock.call(['git', 'log', '--reverse', '--pretty=format:%H',
+ '--', '9695eef57205c17566a3ae543be187759b310bb7', '--not',
+ ''])
+ ]
+ cmd_mock.assert_has_calls(calls, any_order=True)
+
+ assert response == 0
+
+
+def test_git_pre_pull():
+ extras = {
+ 'hooks': ['pull'],
+ 'hooks_uri': 'fake_hook_uri',
+ }
+ with mock_hook_response(status=1, output='foo'):
+ assert hooks.git_pre_pull(extras) == hooks.HookResponse(1, 'foo')
+
+
+def test_git_pre_pull_exception_is_caught():
+ extras = {
+ 'hooks': ['pull'],
+ 'hooks_uri': 'fake_hook_uri',
+ }
+ with mock_hook_response(status=2, exception=Exception('foo')):
+ assert hooks.git_pre_pull(extras).status == 128
+
+
+def test_git_pre_pull_is_disabled():
+ assert hooks.git_pre_pull({'hooks': ['push']}) == hooks.HookResponse(0, '')
+
+
+def test_git_post_pull():
+ extras = {
+ 'hooks': ['pull'],
+ 'hooks_uri': 'fake_hook_uri',
+ }
+ with mock_hook_response(status=1, output='foo'):
+ assert hooks.git_post_pull(extras) == hooks.HookResponse(1, 'foo')
+
+
+def test_git_post_pull_exception_is_caught():
+ extras = {
+ 'hooks': ['pull'],
+ 'hooks_uri': 'fake_hook_uri',
+ }
+ with mock_hook_response(status=2, exception='Exception',
+ exception_args=('foo',)):
+ assert hooks.git_post_pull(extras).status == 128
+
+
+def test_git_post_pull_is_disabled():
+ assert (
+ hooks.git_post_pull({'hooks': ['push']}) == hooks.HookResponse(0, ''))
+
+
+class TestGetHooksClient(object):
+ def test_returns_pyro_client_when_protocol_matches(self):
+ hooks_uri = 'localhost:8000'
+ result = hooks._get_hooks_client({
+ 'hooks_uri': hooks_uri,
+ 'hooks_protocol': 'pyro4'
+ })
+ assert isinstance(result, hooks.HooksPyro4Client)
+ assert result.hooks_uri == hooks_uri
+
+ def test_returns_http_client_when_protocol_matches(self):
+ hooks_uri = 'localhost:8000'
+ result = hooks._get_hooks_client({
+ 'hooks_uri': hooks_uri,
+ 'hooks_protocol': 'http'
+ })
+ assert isinstance(result, hooks.HooksHttpClient)
+ assert result.hooks_uri == hooks_uri
+
+ def test_returns_pyro4_client_when_no_protocol_is_specified(self):
+ hooks_uri = 'localhost:8000'
+ result = hooks._get_hooks_client({
+ 'hooks_uri': hooks_uri
+ })
+ assert isinstance(result, hooks.HooksPyro4Client)
+ assert result.hooks_uri == hooks_uri
+
+ def test_returns_dummy_client_when_hooks_uri_not_specified(self):
+ fake_module = mock.Mock()
+ import_patcher = mock.patch.object(
+ hooks.importlib, 'import_module', return_value=fake_module)
+ fake_module_name = 'fake.module'
+ with import_patcher as import_mock:
+ result = hooks._get_hooks_client(
+ {'hooks_module': fake_module_name})
+
+ import_mock.assert_called_once_with(fake_module_name)
+ assert isinstance(result, hooks.HooksDummyClient)
+ assert result._hooks_module == fake_module
+
+
+class TestHooksHttpClient(object):
+ def test_init_sets_hooks_uri(self):
+ uri = 'localhost:3000'
+ client = hooks.HooksHttpClient(uri)
+ assert client.hooks_uri == uri
+
+ def test_serialize_returns_json_string(self):
+ client = hooks.HooksHttpClient('localhost:3000')
+ hook_name = 'test'
+ extras = {
+ 'first': 1,
+ 'second': 'two'
+ }
+ result = client._serialize(hook_name, extras)
+ expected_result = json.dumps({
+ 'method': hook_name,
+ 'extras': extras
+ })
+ assert result == expected_result
+
+ def test_call_queries_http_server(self, http_mirror):
+ client = hooks.HooksHttpClient(http_mirror.uri)
+ hook_name = 'test'
+ extras = {
+ 'first': 1,
+ 'second': 'two'
+ }
+ result = client(hook_name, extras)
+ expected_result = {
+ 'method': hook_name,
+ 'extras': extras
+ }
+ assert result == expected_result
+
+
+class TestHooksDummyClient(object):
+ def test_init_imports_hooks_module(self):
+ hooks_module_name = 'rhodecode.fake.module'
+ hooks_module = mock.MagicMock()
+
+ import_patcher = mock.patch.object(
+ hooks.importlib, 'import_module', return_value=hooks_module)
+ with import_patcher as import_mock:
+ client = hooks.HooksDummyClient(hooks_module_name)
+ import_mock.assert_called_once_with(hooks_module_name)
+ assert client._hooks_module == hooks_module
+
+ def test_call_returns_hook_result(self):
+ hooks_module_name = 'rhodecode.fake.module'
+ hooks_module = mock.MagicMock()
+ import_patcher = mock.patch.object(
+ hooks.importlib, 'import_module', return_value=hooks_module)
+ with import_patcher:
+ client = hooks.HooksDummyClient(hooks_module_name)
+
+ result = client('post_push', {})
+ hooks_module.Hooks.assert_called_once_with()
+ assert result == hooks_module.Hooks().__enter__().post_push()
+
+
+class TestHooksPyro4Client(object):
+ def test_init_sets_hooks_uri(self):
+ uri = 'localhost:3000'
+ client = hooks.HooksPyro4Client(uri)
+ assert client.hooks_uri == uri
+
+ def test_call_returns_hook_value(self):
+ hooks_uri = 'localhost:3000'
+ client = hooks.HooksPyro4Client(hooks_uri)
+ hooks_module = mock.Mock()
+ context_manager = mock.MagicMock()
+ context_manager.__enter__.return_value = hooks_module
+ pyro4_patcher = mock.patch.object(
+ hooks.Pyro4, 'Proxy', return_value=context_manager)
+ extras = {
+ 'test': 'test'
+ }
+ with pyro4_patcher as pyro4_mock:
+ result = client('post_push', extras)
+ pyro4_mock.assert_called_once_with(hooks_uri)
+ hooks_module.post_push.assert_called_once_with(extras)
+ assert result == hooks_module.post_push.return_value
+
+
+@pytest.fixture
+def http_mirror(request):
+ server = MirrorHttpServer()
+ request.addfinalizer(server.stop)
+ return server
+
+
+class MirrorHttpHandler(BaseHTTPRequestHandler):
+ def do_POST(self):
+ length = int(self.headers['Content-Length'])
+ body = self.rfile.read(length).decode('utf-8')
+ self.send_response(200)
+ self.end_headers()
+ self.wfile.write(body)
+
+
+class MirrorHttpServer(object):
+ ip_address = '127.0.0.1'
+ port = 0
+
+ def __init__(self):
+ self._daemon = TCPServer((self.ip_address, 0), MirrorHttpHandler)
+ _, self.port = self._daemon.server_address
+ self._thread = threading.Thread(target=self._daemon.serve_forever)
+ self._thread.daemon = True
+ self._thread.start()
+
+ def stop(self):
+ self._daemon.shutdown()
+ self._thread.join()
+ self._daemon = None
+ self._thread = None
+
+ @property
+ def uri(self):
+ return '{}:{}'.format(self.ip_address, self.port)
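The MirrorHttpServer/MirrorHttpHandler pair above is small enough to reproduce outside the test suite. Below is a minimal standalone sketch of the same echo pattern (Python 3 syntax, standard library only; the JSON payload is illustrative), useful for sanity-checking the kind of round trip a HooksHttpClient-style caller performs:

```python
import threading
from http.server import BaseHTTPRequestHandler
from socketserver import TCPServer
from urllib.request import Request, urlopen


class EchoHandler(BaseHTTPRequestHandler):
    """Echo the POST body back to the client, like MirrorHttpHandler above."""

    def do_POST(self):
        length = int(self.headers['Content-Length'])
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep request logging out of test output


# Port 0 asks the OS for a free port, as MirrorHttpServer does.
server = TCPServer(('127.0.0.1', 0), EchoHandler)
port = server.server_address[1]
thread = threading.Thread(target=server.serve_forever)
thread.daemon = True
thread.start()

payload = b'{"method": "test", "extras": {"first": 1}}'
response = urlopen(
    Request('http://127.0.0.1:%d/' % port, data=payload)).read()

server.shutdown()
thread.join()
```

Binding to port 0 and reading the chosen port back from `server_address` is the same trick the fixture relies on to avoid port collisions between test runs.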
diff --git a/tests/test_http_performance.py b/tests/test_http_performance.py
new file mode 100644
--- /dev/null
+++ b/tests/test_http_performance.py
@@ -0,0 +1,44 @@
+"""
+Tests used to profile the HTTP-based implementation.
+"""
+
+import pytest
+import webtest
+
+from vcsserver.http_main import main
+
+
+@pytest.fixture
+def vcs_app():
+ stub_settings = {
+ 'dev.use_echo_app': 'true',
+ 'beaker.cache.regions': 'repo_object',
+ 'beaker.cache.repo_object.type': 'memorylru',
+ 'beaker.cache.repo_object.max_items': '100',
+ 'beaker.cache.repo_object.expire': '300',
+ 'beaker.cache.repo_object.enabled': 'true',
+ 'locale': 'en_US.UTF-8',
+ }
+ vcs_app = main({}, **stub_settings)
+ app = webtest.TestApp(vcs_app)
+ return app
+
+
+@pytest.fixture(scope='module')
+def data():
+ one_kb = 'x' * 1024
+ return one_kb * 1024 * 10
+
+
+def test_http_app_streaming_with_data(data, repeat, vcs_app):
+ app = vcs_app
+ for x in xrange(repeat / 10):
+ response = app.post('/stream/git/', params=data)
+ assert response.status_code == 200
+
+
+def test_http_app_streaming_no_data(repeat, vcs_app):
+ app = vcs_app
+ for x in xrange(repeat / 10):
+ response = app.post('/stream/git/')
+ assert response.status_code == 200
diff --git a/tests/test_main.py b/tests/test_main.py
new file mode 100644
--- /dev/null
+++ b/tests/test_main.py
@@ -0,0 +1,36 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import mock
+
+from vcsserver import main
+
+
+@mock.patch('vcsserver.main.VcsServerCommand', mock.Mock())
+@mock.patch('vcsserver.hgpatches.patch_largefiles_capabilities')
+def test_applies_largefiles_patch(patch_largefiles_capabilities):
+ main.main([])
+ patch_largefiles_capabilities.assert_called_once_with()
+
+
+@mock.patch('vcsserver.main.VcsServerCommand', mock.Mock())
+@mock.patch('vcsserver.main.MercurialFactory', None)
+@mock.patch(
+ 'vcsserver.hgpatches.patch_largefiles_capabilities',
+ mock.Mock(side_effect=Exception("Must not be called")))
+def test_applies_largefiles_patch_only_if_mercurial_is_available():
+ main.main([])
diff --git a/tests/test_pygrack.py b/tests/test_pygrack.py
new file mode 100644
--- /dev/null
+++ b/tests/test_pygrack.py
@@ -0,0 +1,249 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import io
+
+import dulwich.protocol
+import mock
+import pytest
+import webob
+import webtest
+
+from vcsserver import hooks, pygrack
+
+# pylint: disable=redefined-outer-name,protected-access
+
+
+@pytest.fixture()
+def pygrack_instance(tmpdir):
+ """
+ Creates a pygrack app instance.
+
+    Right now, it does not do much with the passed directory; it just
+    creates the folders required to pass the repository signature check.
+ """
+ for dir_name in ('config', 'head', 'info', 'objects', 'refs'):
+ tmpdir.mkdir(dir_name)
+
+ return pygrack.GitRepository('repo_name', str(tmpdir), 'git', False, {})
+
+
+@pytest.fixture()
+def pygrack_app(pygrack_instance):
+ """
+ Creates a pygrack app wrapped in webtest.TestApp.
+ """
+ return webtest.TestApp(pygrack_instance)
+
+
+def test_invalid_service_info_refs_returns_403(pygrack_app):
+ response = pygrack_app.get('/info/refs?service=git-upload-packs',
+ expect_errors=True)
+
+ assert response.status_int == 403
+
+
+def test_invalid_endpoint_returns_403(pygrack_app):
+ response = pygrack_app.post('/git-upload-packs', expect_errors=True)
+
+ assert response.status_int == 403
+
+
+@pytest.mark.parametrize('sideband', [
+ 'side-band-64k',
+ 'side-band',
+ 'side-band no-progress',
+])
+def test_pre_pull_hook_fails_with_sideband(pygrack_app, sideband):
+ request = ''.join([
+ '0054want 74730d410fcb6603ace96f1dc55ea6196122532d ',
+ 'multi_ack %s ofs-delta\n' % sideband,
+ '0000',
+ '0009done\n',
+ ])
+ with mock.patch('vcsserver.hooks.git_pre_pull',
+ return_value=hooks.HookResponse(1, 'foo')):
+ response = pygrack_app.post(
+ '/git-upload-pack', params=request,
+ content_type='application/x-git-upload-pack')
+
+ data = io.BytesIO(response.body)
+ proto = dulwich.protocol.Protocol(data.read, None)
+ packets = list(proto.read_pkt_seq())
+
+ expected_packets = [
+ 'NAK\n', '\x02foo', '\x02Pre pull hook failed: aborting\n',
+ '\x01' + pygrack.GitRepository.EMPTY_PACK,
+ ]
+ assert packets == expected_packets
+
+
+def test_pre_pull_hook_fails_no_sideband(pygrack_app):
+    request = ''.join([
+        '0054want 74730d410fcb6603ace96f1dc55ea6196122532d ',
+        'multi_ack ofs-delta\n',
+        '0000',
+        '0009done\n',
+    ])
+ with mock.patch('vcsserver.hooks.git_pre_pull',
+ return_value=hooks.HookResponse(1, 'foo')):
+ response = pygrack_app.post(
+ '/git-upload-pack', params=request,
+ content_type='application/x-git-upload-pack')
+
+ assert response.body == pygrack.GitRepository.EMPTY_PACK
+
+
+def test_pull_has_hook_messages(pygrack_app):
+    request = ''.join([
+        '0054want 74730d410fcb6603ace96f1dc55ea6196122532d ',
+        'multi_ack side-band-64k ofs-delta\n',
+        '0000',
+        '0009done\n',
+    ])
+ with mock.patch('vcsserver.hooks.git_pre_pull',
+ return_value=hooks.HookResponse(0, 'foo')):
+ with mock.patch('vcsserver.hooks.git_post_pull',
+ return_value=hooks.HookResponse(1, 'bar')):
+ with mock.patch('vcsserver.subprocessio.SubprocessIOChunker',
+ return_value=['0008NAK\n0009subp\n0000']):
+ response = pygrack_app.post(
+ '/git-upload-pack', params=request,
+ content_type='application/x-git-upload-pack')
+
+ data = io.BytesIO(response.body)
+ proto = dulwich.protocol.Protocol(data.read, None)
+ packets = list(proto.read_pkt_seq())
+
+ assert packets == ['NAK\n', '\x02foo', 'subp\n', '\x02bar']
+
+
+def test_get_want_capabilities(pygrack_instance):
+ data = io.BytesIO(
+ '0054want 74730d410fcb6603ace96f1dc55ea6196122532d ' +
+ 'multi_ack side-band-64k ofs-delta\n00000009done\n')
+
+ request = webob.Request({
+ 'wsgi.input': data,
+ 'REQUEST_METHOD': 'POST',
+ 'webob.is_body_seekable': True
+ })
+
+ capabilities = pygrack_instance._get_want_capabilities(request)
+
+ assert capabilities == frozenset(
+ ('ofs-delta', 'multi_ack', 'side-band-64k'))
+ assert data.tell() == 0
+
+
+@pytest.mark.parametrize('data,capabilities,expected', [
+ ('foo', [], []),
+ ('', ['side-band-64k'], []),
+ ('', ['side-band'], []),
+ ('foo', ['side-band-64k'], ['0008\x02foo']),
+ ('foo', ['side-band'], ['0008\x02foo']),
+ ('f'*1000, ['side-band-64k'], ['03ed\x02' + 'f' * 1000]),
+ ('f'*1000, ['side-band'], ['03e8\x02' + 'f' * 995, '000a\x02fffff']),
+ ('f'*65520, ['side-band-64k'], ['fff0\x02' + 'f' * 65515, '000a\x02fffff']),
+ ('f'*65520, ['side-band'], ['03e8\x02' + 'f' * 995] * 65 + ['0352\x02' + 'f' * 845]),
+], ids=[
+ 'foo-empty',
+ 'empty-64k', 'empty',
+ 'foo-64k', 'foo',
+ 'f-1000-64k', 'f-1000',
+ 'f-65520-64k', 'f-65520'])
+def test_get_messages(pygrack_instance, data, capabilities, expected):
+ messages = pygrack_instance._get_messages(data, capabilities)
+
+ assert messages == expected
+
+
+@pytest.mark.parametrize('response,capabilities,pre_pull_messages,post_pull_messages', [
+ # Unexpected response
+ ('unexpected_response', ['side-band-64k'], 'foo', 'bar'),
+ # No sideband
+ ('no-sideband', [], 'foo', 'bar'),
+ # No messages
+ ('no-messages', ['side-band-64k'], '', ''),
+])
+def test_inject_messages_to_response_nothing_to_do(
+ pygrack_instance, response, capabilities, pre_pull_messages,
+ post_pull_messages):
+ new_response = pygrack_instance._inject_messages_to_response(
+ response, capabilities, pre_pull_messages, post_pull_messages)
+
+ assert new_response == response
+
+
+@pytest.mark.parametrize('capabilities', [
+ ['side-band'],
+ ['side-band-64k'],
+])
+def test_inject_messages_to_response_single_element(pygrack_instance,
+ capabilities):
+ response = ['0008NAK\n0009subp\n0000']
+ new_response = pygrack_instance._inject_messages_to_response(
+ response, capabilities, 'foo', 'bar')
+
+ expected_response = [
+ '0008NAK\n', '0008\x02foo', '0009subp\n', '0008\x02bar', '0000']
+
+ assert new_response == expected_response
+
+
+@pytest.mark.parametrize('capabilities', [
+ ['side-band'],
+ ['side-band-64k'],
+])
+def test_inject_messages_to_response_multi_element(pygrack_instance,
+ capabilities):
+ response = [
+ '0008NAK\n000asubp1\n', '000asubp2\n', '000asubp3\n', '000asubp4\n0000']
+ new_response = pygrack_instance._inject_messages_to_response(
+ response, capabilities, 'foo', 'bar')
+
+ expected_response = [
+ '0008NAK\n', '0008\x02foo', '000asubp1\n', '000asubp2\n', '000asubp3\n',
+ '000asubp4\n', '0008\x02bar', '0000'
+ ]
+
+ assert new_response == expected_response
+
+
+def test_build_failed_pre_pull_response_no_sideband(pygrack_instance):
+ response = pygrack_instance._build_failed_pre_pull_response([], 'foo')
+
+ assert response == [pygrack.GitRepository.EMPTY_PACK]
+
+
+@pytest.mark.parametrize('capabilities', [
+ ['side-band'],
+ ['side-band-64k'],
+ ['side-band-64k', 'no-progress'],
+])
+def test_build_failed_pre_pull_response(pygrack_instance, capabilities):
+ response = pygrack_instance._build_failed_pre_pull_response(
+ capabilities, 'foo')
+
+ expected_response = [
+ '0008NAK\n', '0008\x02foo', '0024\x02Pre pull hook failed: aborting\n',
+ '%04x\x01%s' % (len(pygrack.GitRepository.EMPTY_PACK) + 5,
+ pygrack.GitRepository.EMPTY_PACK),
+ '0000',
+ ]
+
+ assert response == expected_response
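The packet strings asserted throughout these tests follow Git's pkt-line framing: a four-hex-digit length that counts itself, then (for sideband packets) a one-byte channel marker, `\x01` for pack data and `\x02` for progress messages. A sketch of the framing rule with a hypothetical helper (not part of pygrack):

```python
def pkt_line(channel, payload):
    # The 4-digit hex length counts itself plus the channel byte and
    # the payload, which is why 'foo' frames as '0008\x02foo'.
    body = channel + payload
    return '%04x%s' % (len(body) + 4, body)


framed_foo = pkt_line('\x02', 'foo')
framed_1000 = pkt_line('\x02', 'f' * 1000)
```

The 65520-byte cases in `test_get_messages` follow from the same rule: the maximum payload per packet is 65515 bytes for `side-band-64k` and 995 for plain `side-band` (the 65520/1000 packet limits minus the five framing bytes).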
diff --git a/tests/test_scm_app.py b/tests/test_scm_app.py
new file mode 100644
--- /dev/null
+++ b/tests/test_scm_app.py
@@ -0,0 +1,86 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import os
+
+import mercurial.hg
+import mercurial.ui
+import mercurial.error
+import mock
+import pytest
+import webtest
+
+from vcsserver import scm_app
+
+
+def test_hg_does_not_accept_invalid_cmd(tmpdir):
+ repo = mercurial.hg.repository(mercurial.ui.ui(), str(tmpdir), create=True)
+ app = webtest.TestApp(scm_app.HgWeb(repo))
+
+ response = app.get('/repo?cmd=invalidcmd', expect_errors=True)
+
+ assert response.status_int == 400
+
+
+def test_create_hg_wsgi_app_requirement_error(tmpdir):
+ repo = mercurial.hg.repository(mercurial.ui.ui(), str(tmpdir), create=True)
+ config = (
+ ('paths', 'default', ''),
+ )
+ with mock.patch('vcsserver.scm_app.HgWeb') as hgweb_mock:
+ hgweb_mock.side_effect = mercurial.error.RequirementError()
+ with pytest.raises(Exception):
+ scm_app.create_hg_wsgi_app(str(tmpdir), repo, config)
+
+
+def test_git_returns_not_found(tmpdir):
+ app = webtest.TestApp(
+ scm_app.GitHandler(str(tmpdir), 'repo_name', 'git', False, {}))
+
+ response = app.get('/repo_name/inforefs?service=git-upload-pack',
+ expect_errors=True)
+
+ assert response.status_int == 404
+
+
+def test_git(tmpdir):
+ for dir_name in ('config', 'head', 'info', 'objects', 'refs'):
+ tmpdir.mkdir(dir_name)
+
+ app = webtest.TestApp(
+ scm_app.GitHandler(str(tmpdir), 'repo_name', 'git', False, {}))
+
+ # We set service to git-upload-packs to trigger a 403
+ response = app.get('/repo_name/inforefs?service=git-upload-packs',
+ expect_errors=True)
+
+ assert response.status_int == 403
+
+
+def test_git_fallbacks_to_git_folder(tmpdir):
+ tmpdir.mkdir('.git')
+ for dir_name in ('config', 'head', 'info', 'objects', 'refs'):
+ tmpdir.mkdir(os.path.join('.git', dir_name))
+
+ app = webtest.TestApp(
+ scm_app.GitHandler(str(tmpdir), 'repo_name', 'git', False, {}))
+
+ # We set service to git-upload-packs to trigger a 403
+ response = app.get('/repo_name/inforefs?service=git-upload-packs',
+ expect_errors=True)
+
+ assert response.status_int == 403
diff --git a/tests/test_server.py b/tests/test_server.py
new file mode 100644
--- /dev/null
+++ b/tests/test_server.py
@@ -0,0 +1,39 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import os
+
+import mock
+import pytest
+
+from vcsserver.server import VcsServer
+
+
+def test_provides_the_pid(server):
+ pid = server.get_pid()
+ assert pid == os.getpid()
+
+
+def test_allows_to_trigger_the_garbage_collector(server):
+ with mock.patch('gc.collect') as collect:
+ server.run_gc()
+ assert collect.called
+
+
+@pytest.fixture
+def server():
+ return VcsServer()
diff --git a/tests/test_subprocessio.py b/tests/test_subprocessio.py
new file mode 100644
--- /dev/null
+++ b/tests/test_subprocessio.py
@@ -0,0 +1,122 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import io
+import os
+import sys
+
+import pytest
+
+from vcsserver import subprocessio
+
+
+@pytest.fixture(scope='module')
+def environ():
+ """Delete coverage variables, as they make the tests fail."""
+ env = dict(os.environ)
+ for key in env.keys():
+ if key.startswith('COV_CORE_'):
+ del env[key]
+
+ return env
+
+
+def _get_python_args(script):
+ return [sys.executable, '-c',
+ 'import sys; import time; import shutil; ' + script]
+
+
+def test_raise_exception_on_non_zero_return_code(environ):
+ args = _get_python_args('sys.exit(1)')
+ with pytest.raises(EnvironmentError):
+ list(subprocessio.SubprocessIOChunker(args, shell=False, env=environ))
+
+
+def test_does_not_fail_on_non_zero_return_code(environ):
+ args = _get_python_args('sys.exit(1)')
+ output = ''.join(subprocessio.SubprocessIOChunker(
+ args, shell=False, fail_on_return_code=False, env=environ))
+
+ assert output == ''
+
+
+def test_raise_exception_on_stderr(environ):
+ args = _get_python_args('sys.stderr.write("X"); time.sleep(1);')
+ with pytest.raises(EnvironmentError) as excinfo:
+ list(subprocessio.SubprocessIOChunker(args, shell=False, env=environ))
+
+ assert 'exited due to an error:\nX' in str(excinfo.value)
+
+
+def test_does_not_fail_on_stderr(environ):
+ args = _get_python_args('sys.stderr.write("X"); time.sleep(1);')
+ output = ''.join(subprocessio.SubprocessIOChunker(
+ args, shell=False, fail_on_stderr=False, env=environ))
+
+ assert output == ''
+
+
+@pytest.mark.parametrize('size', [1, 10**5])
+def test_output_with_no_input(size, environ):
+ data = 'X'
+ args = _get_python_args('sys.stdout.write("%s" * %d)' % (data, size))
+ output = ''.join(subprocessio.SubprocessIOChunker(
+ args, shell=False, env=environ))
+
+ assert output == data * size
+
+
+@pytest.mark.parametrize('size', [1, 10**5])
+def test_output_with_no_input_does_not_fail(size, environ):
+ data = 'X'
+ args = _get_python_args(
+ 'sys.stdout.write("%s" * %d); sys.exit(1)' % (data, size))
+ output = ''.join(subprocessio.SubprocessIOChunker(
+ args, shell=False, fail_on_return_code=False, env=environ))
+
+ assert output == data * size
+
+
+@pytest.mark.parametrize('size', [1, 10**5])
+def test_output_with_input(size, environ):
+ data = 'X' * size
+ inputstream = io.BytesIO(data)
+ # This acts like the cat command.
+ args = _get_python_args('shutil.copyfileobj(sys.stdin, sys.stdout)')
+ output = ''.join(subprocessio.SubprocessIOChunker(
+ args, shell=False, inputstream=inputstream, env=environ))
+
+ assert output == data
+
+
+@pytest.mark.parametrize('size', [1, 10**5])
+def test_output_with_input_skipping_iterator(size, environ):
+ data = 'X' * size
+ inputstream = io.BytesIO(data)
+ # This acts like the cat command.
+ args = _get_python_args('shutil.copyfileobj(sys.stdin, sys.stdout)')
+
+ # Note: assigning the chunker makes sure that it is not deleted too early
+ chunker = subprocessio.SubprocessIOChunker(
+ args, shell=False, inputstream=inputstream, env=environ)
+ output = ''.join(chunker.output)
+
+ assert output == data
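The contract these tests pin down, streaming a child's stdout in chunks while optionally tolerating a non-zero exit code, can be sketched with the standard library alone. This is an illustrative stand-in, not the vcsserver SubprocessIOChunker implementation (which additionally handles stderr and input streams):

```python
import subprocess
import sys


def iter_chunks(args, chunk_size=4096, fail_on_return_code=True):
    """Yield the child's stdout in chunks; optionally raise on failure."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE)
    while True:
        chunk = proc.stdout.read(chunk_size)
        if not chunk:
            break
        yield chunk
    if proc.wait() != 0 and fail_on_return_code:
        raise EnvironmentError(
            'subprocess exited with code %d' % proc.returncode)


args = [sys.executable, '-c', 'import sys; sys.stdout.write("X" * 100000)']
output = b''.join(iter_chunks(args))
```

Consuming the generator fully, as `''.join(...)` does in the tests, is what drains the pipe; stopping early would leave the child blocked on a full stdout buffer.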
diff --git a/tests/test_svn.py b/tests/test_svn.py
new file mode 100644
--- /dev/null
+++ b/tests/test_svn.py
@@ -0,0 +1,67 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import io
+import mock
+import pytest
+import sys
+
+
+class MockPopen(object):
+ def __init__(self, stderr):
+ self.stdout = io.BytesIO('')
+ self.stderr = io.BytesIO(stderr)
+ self.returncode = 1
+
+ def wait(self):
+ pass
+
+
+INVALID_CERTIFICATE_STDERR = '\n'.join([
+ 'svnrdump: E230001: Unable to connect to a repository at URL url',
+ 'svnrdump: E230001: Server SSL certificate verification failed: issuer is not trusted',
+])
+
+
+@pytest.mark.parametrize('stderr,expected_reason', [
+ (INVALID_CERTIFICATE_STDERR, 'INVALID_CERTIFICATE'),
+ ('svnrdump: E123456', 'UNKNOWN'),
+])
+@pytest.mark.xfail(sys.platform == "cygwin",
+ reason="SVN not packaged for Cygwin")
+def test_import_remote_repository_certificate_error(stderr, expected_reason):
+ from vcsserver import svn
+
+ remote = svn.SvnRemote(None)
+ remote.is_path_valid_repository = lambda wire, path: True
+
+ with mock.patch('subprocess.Popen',
+ return_value=MockPopen(stderr)):
+ with pytest.raises(Exception) as excinfo:
+ remote.import_remote_repository({'path': 'path'}, 'url')
+
+ expected_error_args = (
+ 'Failed to dump the remote repository from url.',
+ expected_reason)
+
+ assert excinfo.value.args == expected_error_args
+
+
+def test_svn_libraries_can_be_imported():
+ import svn
+ import svn.client
+ assert svn.client is not None
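The reason strings asserted above amount to classifying svnrdump's stderr by substring. A hedged sketch of that mapping, where only the reason names come from the test and the matching rule itself is an assumption for illustration:

```python
def classify_svn_error(stderr):
    """Map svnrdump stderr output to a coarse failure reason."""
    if 'SSL certificate verification failed' in stderr:
        return 'INVALID_CERTIFICATE'
    return 'UNKNOWN'


cert_reason = classify_svn_error(
    'svnrdump: E230001: Server SSL certificate verification failed: '
    'issuer is not trusted')
other_reason = classify_svn_error('svnrdump: E123456')
```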
diff --git a/tests/test_vcsserver.py b/tests/test_vcsserver.py
new file mode 100644
--- /dev/null
+++ b/tests/test_vcsserver.py
@@ -0,0 +1,132 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import subprocess
+import StringIO
+import time
+
+import pytest
+
+from fixture import TestINI
+
+
+@pytest.mark.parametrize("arguments, expected_texts", [
+ (['--threadpool=192'], [
+ 'threadpool_size: 192',
+ 'worker pool of size 192 created',
+ 'Threadpool size set to 192']),
+ (['--locale=fake'], [
+ 'Cannot set locale, not configuring the locale system']),
+ (['--timeout=5'], [
+ 'Timeout for RPC calls set to 5.0 seconds']),
+ (['--log-level=info'], [
+ 'log_level: info']),
+ (['--port={port}'], [
+ 'port: {port}',
+ 'created daemon on localhost:{port}']),
+ (['--host=127.0.0.1', '--port={port}'], [
+ 'port: {port}',
+ 'host: 127.0.0.1',
+ 'created daemon on 127.0.0.1:{port}']),
+ (['--config=/bad/file'], ['OSError: File /bad/file does not exist']),
+])
+def test_vcsserver_calls(arguments, expected_texts, vcsserver_port):
+ port_argument = '--port={port}'
+ if port_argument not in arguments:
+ arguments.append(port_argument)
+ arguments = _replace_port(arguments, vcsserver_port)
+ expected_texts = _replace_port(expected_texts, vcsserver_port)
+ output = call_vcs_server_with_arguments(arguments)
+ for text in expected_texts:
+ assert text in output
+
+
+def _replace_port(values, port):
+ return [value.format(port=port) for value in values]
+
+
+def test_vcsserver_with_config(vcsserver_port):
+ ini_def = [
+ {'DEFAULT': {'host': '127.0.0.1'}},
+ {'DEFAULT': {'threadpool_size': '111'}},
+ {'DEFAULT': {'port': vcsserver_port}},
+ ]
+
+ with TestINI('test.ini', ini_def) as new_test_ini_path:
+ output = call_vcs_server_with_arguments(
+ ['--config=' + new_test_ini_path])
+
+ expected_texts = [
+ 'host: 127.0.0.1',
+ 'Threadpool size set to 111',
+ ]
+ for text in expected_texts:
+ assert text in output
+
+
+def test_vcsserver_with_config_cli_overwrite(vcsserver_port):
+ ini_def = [
+ {'DEFAULT': {'host': '127.0.0.1'}},
+ {'DEFAULT': {'port': vcsserver_port}},
+ {'DEFAULT': {'threadpool_size': '111'}},
+ {'DEFAULT': {'timeout': '0'}},
+ ]
+ with TestINI('test.ini', ini_def) as new_test_ini_path:
+ output = call_vcs_server_with_arguments([
+ '--config=' + new_test_ini_path,
+ '--host=128.0.0.1',
+ '--threadpool=256',
+ '--timeout=5'])
+ expected_texts = [
+ 'host: 128.0.0.1',
+ 'Threadpool size set to 256',
+ 'Timeout for RPC calls set to 5.0 seconds',
+ ]
+ for text in expected_texts:
+ assert text in output
+
+
+def call_vcs_server_with_arguments(args):
+ vcs = subprocess.Popen(
+ ["vcsserver"] + args,
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+
+ output = read_output_until(
+ "Starting vcsserver.main", vcs.stdout)
+ vcs.terminate()
+ return output
+
+
+def call_vcs_server_with_non_existing_config_file(args):
+ vcs = subprocess.Popen(
+ ["vcsserver", "--config=/tmp/bad"] + args,
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ output = read_output_until(
+ "Starting vcsserver.main", vcs.stdout)
+ vcs.terminate()
+ return output
+
+
+def read_output_until(expected, source, timeout=5):
+ ts = time.time()
+ buf = StringIO.StringIO()
+    while time.time() - ts < timeout:
+        line = source.readline()
+        if not line:
+            # EOF: stop instead of polling empty reads until the timeout.
+            break
+        buf.write(line)
+        if expected in line:
+            break
+ return buf.getvalue()
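The `read_output_until` helper above is easy to exercise in isolation. A Python 3 sketch, with `io.StringIO` standing in for the subprocess pipe and an explicit EOF break added so an exhausted stream does not spin until the timeout:

```python
import io
import time


def read_output_until(expected, source, timeout=5):
    """Collect lines until `expected` appears, EOF, or the timeout."""
    ts = time.time()
    buf = io.StringIO()
    while time.time() - ts < timeout:
        line = source.readline()
        if not line:  # EOF: stop instead of polling empty reads
            break
        buf.write(line)
        if expected in line:
            break
    return buf.getvalue()


source = io.StringIO('booting\nStarting vcsserver.main\nextra line\n')
output = read_output_until('Starting vcsserver.main', source)
```

Everything after the marker line ('extra line\n' here) is intentionally left unread, which is why the tests only assert on text emitted before startup completes.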
diff --git a/tests/test_wsgi_app_caller.py b/tests/test_wsgi_app_caller.py
new file mode 100644
--- /dev/null
+++ b/tests/test_wsgi_app_caller.py
@@ -0,0 +1,96 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import wsgiref.simple_server
+import wsgiref.validate
+
+from vcsserver import wsgi_app_caller
+
+
+# pylint: disable=protected-access,too-many-public-methods
+
+
+@wsgiref.validate.validator
+def demo_app(environ, start_response):
+ """WSGI app used for testing."""
+ data = [
+ 'Hello World!\n',
+ 'input_data=%s\n' % environ['wsgi.input'].read(),
+ ]
+ for key, value in sorted(environ.items()):
+ data.append('%s=%s\n' % (key, value))
+
+ write = start_response("200 OK", [('Content-Type', 'text/plain')])
+ write('Old school write method\n')
+ write('***********************\n')
+ return data
+
+
+BASE_ENVIRON = {
+ 'REQUEST_METHOD': 'GET',
+ 'SERVER_NAME': 'localhost',
+ 'SERVER_PORT': '80',
+ 'SCRIPT_NAME': '',
+ 'PATH_INFO': '/',
+ 'QUERY_STRING': '',
+ 'foo.var': 'bla',
+}
+
+
+def test_complete_environ():
+ environ = dict(BASE_ENVIRON)
+ data = "data"
+ wsgi_app_caller._complete_environ(environ, data)
+ wsgiref.validate.check_environ(environ)
+
+ assert data == environ['wsgi.input'].read()
+
+
+def test_start_response():
+ start_response = wsgi_app_caller._StartResponse()
+ status = '200 OK'
+ headers = [('Content-Type', 'text/plain')]
+ start_response(status, headers)
+
+ assert status == start_response.status
+ assert headers == start_response.headers
+
+
+def test_start_response_with_error():
+ start_response = wsgi_app_caller._StartResponse()
+ status = '500 Internal Server Error'
+ headers = [('Content-Type', 'text/plain')]
+ start_response(status, headers, (None, None, None))
+
+ assert status == start_response.status
+ assert headers == start_response.headers
+
+
+def test_wsgi_app_caller():
+ caller = wsgi_app_caller.WSGIAppCaller(demo_app)
+ environ = dict(BASE_ENVIRON)
+ input_data = 'some text'
+ responses, status, headers = caller.handle(environ, input_data)
+ response = ''.join(responses)
+
+ assert status == '200 OK'
+ assert headers == [('Content-Type', 'text/plain')]
+ assert response.startswith(
+ 'Old school write method\n***********************\n')
+ assert 'Hello World!\n' in response
+ assert 'foo.var=bla\n' in response
+ assert 'input_data=%s\n' % input_data in response
diff --git a/vcsserver/VERSION b/vcsserver/VERSION
new file mode 100644
--- /dev/null
+++ b/vcsserver/VERSION
@@ -0,0 +1,1 @@
+4.0.0
\ No newline at end of file
diff --git a/vcsserver/__init__.py b/vcsserver/__init__.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/__init__.py
@@ -0,0 +1,21 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import pkgutil
+
+
+__version__ = pkgutil.get_data('vcsserver', 'VERSION').strip()
diff --git a/vcsserver/base.py b/vcsserver/base.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/base.py
@@ -0,0 +1,71 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import logging
+
+
+log = logging.getLogger(__name__)
+
+
+class RepoFactory(object):
+ """
+ Utility to create instances of repository
+
+ It provides internal caching of the `repo` object based on
+ the :term:`call context`.
+ """
+
+ def __init__(self, repo_cache):
+ self._cache = repo_cache
+
+ def _create_config(self, path, config):
+ """Base implementation returns an empty config, subclasses override it."""
+ return {}
+
+ def _create_repo(self, wire, create):
+ raise NotImplementedError()
+
+ def repo(self, wire, create=False):
+ """
+ Get a repository instance for the given path.
+
+ Internally uses the low level beaker API, since the decorators introduce
+ significant overhead.
+ """
+ def create_new_repo():
+ return self._create_repo(wire, create)
+
+ return self._repo(wire, create_new_repo)
+
+ def _repo(self, wire, createfunc):
+ context = wire.get('context', None)
+ cache = wire.get('cache', True)
+ log.debug(
+ 'GET %s@%s with cache:%s. Context: %s',
+ self.__class__.__name__, wire['path'], cache, context)
+
+ if context and cache:
+ cache_key = (context, wire['path'])
+ log.debug(
+ 'FETCH %s@%s repo object from cache. Context: %s',
+ self.__class__.__name__, wire['path'], context)
+ return self._cache.get(key=cache_key, createfunc=createfunc)
+ else:
+ log.debug(
+ 'INIT %s@%s repo object based on wire %s. Context: %s',
+ self.__class__.__name__, wire['path'], wire, context)
+ return createfunc()
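The caching contract of `RepoFactory._repo` above can be sketched with a plain dict standing in for the beaker cache. All names here (`DictRepoCache`, `repo_for_wire`) are hypothetical illustrations, not part of the real API:

```python
class DictRepoCache(object):
    """Minimal stand-in for the beaker cache that RepoFactory relies on."""

    def __init__(self):
        self._store = {}

    def get(self, key, createfunc):
        # Create the value on first access, then reuse it for the same key.
        if key not in self._store:
            self._store[key] = createfunc()
        return self._store[key]


def repo_for_wire(cache, wire, createfunc):
    # Mirrors RepoFactory._repo: cache only when a call context is present
    # and caching is not explicitly disabled in the wire.
    context = wire.get('context')
    if context and wire.get('cache', True):
        return cache.get(key=(context, wire['path']), createfunc=createfunc)
    return createfunc()


cache = DictRepoCache()
wire = {'path': '/repos/demo', 'context': 'request-1'}
first = repo_for_wire(cache, wire, createfunc=object)
second = repo_for_wire(cache, wire, createfunc=object)
assert first is second  # same context and path -> cached instance
```

Without a context the factory falls through to `createfunc()` every time, so each call yields a fresh repo object.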
diff --git a/vcsserver/echo_stub/__init__.py b/vcsserver/echo_stub/__init__.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/echo_stub/__init__.py
@@ -0,0 +1,8 @@
+"""
+Provides a stub implementation for VCS operations.
+
+Intended usage is to help in performance measurements. The basic idea is to
+implement an `EchoApp` which sends back what it gets. Based on a configuration
+parameter this app can be activated, so that it replaces the endpoints for Git
+and Mercurial.
+"""
diff --git a/vcsserver/echo_stub/echo_app.py b/vcsserver/echo_stub/echo_app.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/echo_stub/echo_app.py
@@ -0,0 +1,34 @@
+"""
+Implementation of :class:`EchoApp`.
+
+This WSGI application will just echo back the data which it receives.
+"""
+
+import logging
+
+
+log = logging.getLogger(__name__)
+
+
+class EchoApp(object):
+
+ def __init__(self, repo_path, repo_name, config):
+ self._repo_path = repo_path
+ log.info("EchoApp initialized for %s", repo_path)
+
+ def __call__(self, environ, start_response):
+ log.debug("EchoApp called for %s", self._repo_path)
+ log.debug("Content-Length: %s", environ.get('CONTENT_LENGTH'))
+ environ['wsgi.input'].read()
+ status = '200 OK'
+ headers = []
+ start_response(status, headers)
+ return ["ECHO"]
+
+
+def create_app():
+ """
+ Allows running this app directly in a WSGI server.
+ """
+ stub_config = {}
+ return EchoApp('stub_path', 'stub_name', stub_config)
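An app like `EchoApp` can be exercised without any server by calling it like any other WSGI callable. This sketch uses a simplified stand-in app and a hypothetical `call_wsgi_app` helper so it stays self-contained:

```python
import io


def echo_app(environ, start_response):
    # Simplified version of EchoApp: drain the input, answer with "ECHO".
    environ['wsgi.input'].read()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ECHO']


def call_wsgi_app(app, body=b''):
    captured = {}

    def start_response(status, headers, exc_info=None):
        captured['status'] = status
        captured['headers'] = headers

    environ = {
        'REQUEST_METHOD': 'GET',
        'wsgi.input': io.BytesIO(body),
    }
    response = b''.join(app(environ, start_response))
    return captured['status'], captured['headers'], response


status, headers, body = call_wsgi_app(echo_app, b'ignored payload')
assert status == '200 OK'
assert body == b'ECHO'
```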
diff --git a/vcsserver/echo_stub/remote_wsgi.py b/vcsserver/echo_stub/remote_wsgi.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/echo_stub/remote_wsgi.py
@@ -0,0 +1,45 @@
+"""
+Provides the same API as :mod:`remote_wsgi`.
+
+Uses the `EchoApp` instead of real implementations.
+"""
+
+import logging
+
+from .echo_app import EchoApp
+from vcsserver import wsgi_app_caller
+
+
+log = logging.getLogger(__name__)
+
+
+class GitRemoteWsgi(object):
+ def handle(self, environ, input_data, *args, **kwargs):
+ app = wsgi_app_caller.WSGIAppCaller(
+ create_echo_wsgi_app(*args, **kwargs))
+
+ return app.handle(environ, input_data)
+
+
+class HgRemoteWsgi(object):
+ def handle(self, environ, input_data, *args, **kwargs):
+ app = wsgi_app_caller.WSGIAppCaller(
+ create_echo_wsgi_app(*args, **kwargs))
+
+ return app.handle(environ, input_data)
+
+
+def create_echo_wsgi_app(repo_path, repo_name, config):
+ log.debug("Creating EchoApp WSGI application")
+
+ _assert_valid_config(config)
+
+ # Remaining items are forwarded to have the extras available
+ return EchoApp(repo_path, repo_name, config=config)
+
+
+def _assert_valid_config(config):
+ config = config.copy()
+
+ # This is what git needs from config at this stage
+ config.pop('git_update_server_info')
diff --git a/vcsserver/exceptions.py b/vcsserver/exceptions.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/exceptions.py
@@ -0,0 +1,56 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+"""
+Special exception handling over the wire.
+
+Since we cannot assume that our client is able to import our exception classes,
+this module provides a "wrapping" mechanism to raise plain exceptions
+which contain an extra attribute `_vcs_kind` to allow a client to distinguish
+different error conditions.
+"""
+
+import functools
+
+
+def _make_exception(kind, *args):
+ """
+ Prepares a base `Exception` instance to be sent over the wire.
+
+ To give our caller a hint what this is about, it will attach an attribute
+ `_vcs_kind` to the exception.
+ """
+ exc = Exception(*args)
+ exc._vcs_kind = kind
+ return exc
+
+
+AbortException = functools.partial(_make_exception, 'abort')
+
+ArchiveException = functools.partial(_make_exception, 'archive')
+
+LookupException = functools.partial(_make_exception, 'lookup')
+
+VcsException = functools.partial(_make_exception, 'error')
+
+RepositoryLockedException = functools.partial(_make_exception, 'repo_locked')
+
+RequirementException = functools.partial(_make_exception, 'requirement')
+
+UnhandledException = functools.partial(_make_exception, 'unhandled')
+
+URLError = functools.partial(_make_exception, 'url_error')
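Because the wrapped exceptions are plain `Exception` instances, a client only needs to inspect the `_vcs_kind` attribute to distinguish error conditions. A minimal sketch of both sides follows; the client-side `classify` helper is a hypothetical illustration, not part of this module:

```python
import functools


def _make_exception(kind, *args):
    # Same wrapping trick as in vcsserver.exceptions: attach the error kind
    # to a plain Exception so it survives serialization over the wire.
    exc = Exception(*args)
    exc._vcs_kind = kind
    return exc


LookupException = functools.partial(_make_exception, 'lookup')


def classify(exc):
    # Client side: no vcsserver imports needed, just read the attribute.
    return getattr(exc, '_vcs_kind', 'unknown')


try:
    raise LookupException('commit deadbeef not found')
except Exception as exc:
    assert classify(exc) == 'lookup'
    assert str(exc) == 'commit deadbeef not found'
```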
diff --git a/vcsserver/git.py b/vcsserver/git.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/git.py
@@ -0,0 +1,588 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import logging
+import os
+import posixpath as vcspath
+import re
+import stat
+import urllib
+import urllib2
+from functools import wraps
+
+from dulwich import index, objects
+from dulwich.client import HttpGitClient, LocalGitClient
+from dulwich.errors import (
+ NotGitRepository, ChecksumMismatch, WrongObjectException,
+ MissingCommitError, ObjectMissing, HangupException,
+ UnexpectedCommandError)
+from dulwich.repo import Repo as DulwichRepo, Tag
+from dulwich.server import update_server_info
+
+from vcsserver import exceptions, settings, subprocessio
+from vcsserver.utils import safe_str
+from vcsserver.base import RepoFactory
+from vcsserver.hgcompat import (
+ hg_url, httpbasicauthhandler, httpdigestauthhandler)
+
+
+DIR_STAT = stat.S_IFDIR
+FILE_MODE = stat.S_IFMT
+GIT_LINK = objects.S_IFGITLINK
+
+log = logging.getLogger(__name__)
+
+
+def reraise_safe_exceptions(func):
+ """Converts Dulwich exceptions to something neutral."""
+ @wraps(func)
+ def wrapper(*args, **kwargs):
+ try:
+ return func(*args, **kwargs)
+ except (ChecksumMismatch, WrongObjectException, MissingCommitError,
+ ObjectMissing) as e:
+ raise exceptions.LookupException(e.message)
+ except (HangupException, UnexpectedCommandError) as e:
+ raise exceptions.VcsException(e.message)
+ return wrapper
+
+
+class Repo(DulwichRepo):
+ """
+ A wrapper for dulwich Repo class.
+
+ Dulwich sometimes keeps .idx file descriptors open, which leads to a
+ "Too many open files" error. We need to close all opened file descriptors
+ once the repo object is destroyed.
+
+ TODO: mikhail: please check if we need this wrapper after updating dulwich
+ to 0.12.0 +
+ """
+ def __del__(self):
+ if hasattr(self, 'object_store'):
+ self.close()
+
+
+class GitFactory(RepoFactory):
+
+ def _create_repo(self, wire, create):
+ repo_path = str_to_dulwich(wire['path'])
+ return Repo(repo_path)
+
+
+class GitRemote(object):
+
+ def __init__(self, factory):
+ self._factory = factory
+
+ self._bulk_methods = {
+ "author": self.commit_attribute,
+ "date": self.get_object_attrs,
+ "message": self.commit_attribute,
+ "parents": self.commit_attribute,
+ "_commit": self.revision,
+ }
+
+ def _assign_ref(self, wire, ref, commit_id):
+ repo = self._factory.repo(wire)
+ repo[ref] = commit_id
+
+ @reraise_safe_exceptions
+ def add_object(self, wire, content):
+ repo = self._factory.repo(wire)
+ blob = objects.Blob()
+ blob.set_raw_string(content)
+ repo.object_store.add_object(blob)
+ return blob.id
+
+ @reraise_safe_exceptions
+ def assert_correct_path(self, wire):
+ try:
+ self._factory.repo(wire)
+ except NotGitRepository as e:
+ # The exception can contain unicode, hence we use repr() here
+ raise exceptions.AbortException(repr(e))
+
+ @reraise_safe_exceptions
+ def bare(self, wire):
+ repo = self._factory.repo(wire)
+ return repo.bare
+
+ @reraise_safe_exceptions
+ def blob_as_pretty_string(self, wire, sha):
+ repo = self._factory.repo(wire)
+ return repo[sha].as_pretty_string()
+
+ @reraise_safe_exceptions
+ def blob_raw_length(self, wire, sha):
+ repo = self._factory.repo(wire)
+ blob = repo[sha]
+ return blob.raw_length()
+
+ @reraise_safe_exceptions
+ def bulk_request(self, wire, rev, pre_load):
+ result = {}
+ for attr in pre_load:
+ try:
+ method = self._bulk_methods[attr]
+ args = [wire, rev]
+ if attr == "date":
+ args.extend(["commit_time", "commit_timezone"])
+ elif attr in ["author", "message", "parents"]:
+ args.append(attr)
+ result[attr] = method(*args)
+ except KeyError:
+ raise exceptions.VcsException(
+ "Unknown bulk attribute: %s" % attr)
+ return result
+
+ def _build_opener(self, url):
+ handlers = []
+ url_obj = hg_url(url)
+ _, authinfo = url_obj.authinfo()
+
+ if authinfo:
+ # create a password manager
+ passmgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
+ passmgr.add_password(*authinfo)
+
+ handlers.extend((httpbasicauthhandler(passmgr),
+ httpdigestauthhandler(passmgr)))
+
+ return urllib2.build_opener(*handlers)
+
+ @reraise_safe_exceptions
+ def check_url(self, url, config):
+ url_obj = hg_url(url)
+ test_uri, _ = url_obj.authinfo()
+ url_obj.passwd = '*****'
+ cleaned_uri = str(url_obj)
+
+ if not test_uri.endswith('info/refs'):
+ test_uri = test_uri.rstrip('/') + '/info/refs'
+
+ o = self._build_opener(url)
+ o.addheaders = [('User-Agent', 'git/1.7.8.0')] # fake some git
+
+ q = {"service": 'git-upload-pack'}
+ qs = '?%s' % urllib.urlencode(q)
+ cu = "%s%s" % (test_uri, qs)
+ req = urllib2.Request(cu, None, {})
+
+ try:
+ resp = o.open(req)
+ if resp.code != 200:
+ raise Exception('Return Code is not 200')
+ except Exception as e:
+ # means it cannot be cloned
+ raise urllib2.URLError("[%s] org_exc: %s" % (cleaned_uri, e))
+
+ # now detect if it's a proper git repo
+ gitdata = resp.read()
+ if 'service=git-upload-pack' in gitdata:
+ pass
+ elif re.findall(r'[0-9a-fA-F]{40}\s+refs', gitdata):
+ # old-style git can return some other format!
+ pass
+ else:
+ raise urllib2.URLError(
+ "url [%s] does not look like a git repository" % (cleaned_uri,))
+
+ return True
+
+ @reraise_safe_exceptions
+ def clone(self, wire, url, deferred, valid_refs, update_after_clone):
+ remote_refs = self.fetch(wire, url, apply_refs=False)
+ repo = self._factory.repo(wire)
+ if isinstance(valid_refs, list):
+ valid_refs = tuple(valid_refs)
+
+ for k in remote_refs:
+ # only parse heads/tags and skip so-called deferred tags
+ if k.startswith(valid_refs) and not k.endswith(deferred):
+ repo[k] = remote_refs[k]
+
+ if update_after_clone:
+ # we want to checkout HEAD
+ repo["HEAD"] = remote_refs["HEAD"]
+ index.build_index_from_tree(repo.path, repo.index_path(),
+ repo.object_store, repo["HEAD"].tree)
+
+ # TODO: this is quite complex, check if that can be simplified
+ @reraise_safe_exceptions
+ def commit(self, wire, commit_data, branch, commit_tree, updated, removed):
+ repo = self._factory.repo(wire)
+ object_store = repo.object_store
+
+ # Create tree and populates it with blobs
+ commit_tree = commit_tree and repo[commit_tree] or objects.Tree()
+
+ for node in updated:
+ # Compute subdirs if needed
+ dirpath, nodename = vcspath.split(node['path'])
+ dirnames = map(safe_str, dirpath and dirpath.split('/') or [])
+ parent = commit_tree
+ ancestors = [('', parent)]
+
+ # Tries to dig for the deepest existing tree
+ while dirnames:
+ curdir = dirnames.pop(0)
+ try:
+ dir_id = parent[curdir][1]
+ except KeyError:
+ # put curdir back into dirnames and stop
+ dirnames.insert(0, curdir)
+ break
+ else:
+ # If found, updates parent
+ parent = repo[dir_id]
+ ancestors.append((curdir, parent))
+ # Now parent is deepest existing tree and we need to create
+ # subtrees for dirnames (in reverse order)
+ # [this only applies for nodes from added]
+ new_trees = []
+
+ blob = objects.Blob.from_string(node['content'])
+
+ if dirnames:
+ # If there are trees which should be created we need to build
+ # them now (in reverse order)
+ reversed_dirnames = list(reversed(dirnames))
+ curtree = objects.Tree()
+ curtree[node['node_path']] = node['mode'], blob.id
+ new_trees.append(curtree)
+ for dirname in reversed_dirnames[:-1]:
+ newtree = objects.Tree()
+ newtree[dirname] = (DIR_STAT, curtree.id)
+ new_trees.append(newtree)
+ curtree = newtree
+ parent[reversed_dirnames[-1]] = (DIR_STAT, curtree.id)
+ else:
+ parent.add(
+ name=node['node_path'], mode=node['mode'], hexsha=blob.id)
+
+ new_trees.append(parent)
+ # Update ancestors
+ reversed_ancestors = reversed(
+ [(a[1], b[1], b[0]) for a, b in zip(ancestors, ancestors[1:])])
+ for parent, tree, path in reversed_ancestors:
+ parent[path] = (DIR_STAT, tree.id)
+ object_store.add_object(tree)
+
+ object_store.add_object(blob)
+ for tree in new_trees:
+ object_store.add_object(tree)
+
+ for node_path in removed:
+ paths = node_path.split('/')
+ tree = commit_tree
+ trees = [tree]
+ # Traverse deep into the forest...
+ for path in paths:
+ try:
+ obj = repo[tree[path][1]]
+ if isinstance(obj, objects.Tree):
+ trees.append(obj)
+ tree = obj
+ except KeyError:
+ break
+ # Cut down the blob and all rotten trees on the way back...
+ for path, tree in reversed(zip(paths, trees)):
+ del tree[path]
+ if tree:
+ # This tree still has elements - don't remove it or any
+ # of its parents
+ break
+
+ object_store.add_object(commit_tree)
+
+ # Create commit
+ commit = objects.Commit()
+ commit.tree = commit_tree.id
+ for k, v in commit_data.iteritems():
+ setattr(commit, k, v)
+ object_store.add_object(commit)
+
+ ref = 'refs/heads/%s' % branch
+ repo.refs[ref] = commit.id
+
+ return commit.id
+
+ @reraise_safe_exceptions
+ def fetch(self, wire, url, apply_refs=True, refs=None):
+ if url != 'default' and '://' not in url:
+ client = LocalGitClient(url)
+ else:
+ url_obj = hg_url(url)
+ o = self._build_opener(url)
+ url, _ = url_obj.authinfo()
+ client = HttpGitClient(base_url=url, opener=o)
+ repo = self._factory.repo(wire)
+
+ determine_wants = repo.object_store.determine_wants_all
+ if refs:
+ def determine_wants_requested(references):
+ return [references[r] for r in references if r in refs]
+ determine_wants = determine_wants_requested
+
+ try:
+ remote_refs = client.fetch(
+ path=url, target=repo, determine_wants=determine_wants)
+ except NotGitRepository:
+ log.warning(
+ 'Trying to fetch from "%s" failed, not a Git repository.', url)
+ raise exceptions.AbortException()
+
+ # mikhail: client.fetch() returns all the remote refs, but fetches only
+ # refs filtered by `determine_wants` function. We need to filter result
+ # as well
+ if refs:
+ remote_refs = {k: remote_refs[k] for k in remote_refs if k in refs}
+
+ if apply_refs:
+ # TODO: johbo: Needs proper test coverage with a git repository
+ # that contains a tag object, so that we would end up with
+ # a peeled ref at this point.
+ PEELED_REF_MARKER = '^{}'
+ for k in remote_refs:
+ if k.endswith(PEELED_REF_MARKER):
+ log.info("Skipping peeled reference %s", k)
+ continue
+ repo[k] = remote_refs[k]
+
+ if refs:
+ # mikhail: explicitly set the head to the last ref.
+ repo['HEAD'] = remote_refs[refs[-1]]
+
+ # TODO: mikhail: should we return remote_refs here to be
+ # consistent?
+ else:
+ return remote_refs
+
+ @reraise_safe_exceptions
+ def get_remote_refs(self, wire, url):
+ repo = Repo(url)
+ return repo.get_refs()
+
+ @reraise_safe_exceptions
+ def get_description(self, wire):
+ repo = self._factory.repo(wire)
+ return repo.get_description()
+
+ @reraise_safe_exceptions
+ def get_file_history(self, wire, file_path, commit_id, limit):
+ repo = self._factory.repo(wire)
+ include = [commit_id]
+ paths = [file_path]
+
+ walker = repo.get_walker(include, paths=paths, max_entries=limit)
+ return [x.commit.id for x in walker]
+
+ @reraise_safe_exceptions
+ def get_missing_revs(self, wire, rev1, rev2, path2):
+ repo = self._factory.repo(wire)
+ LocalGitClient(thin_packs=False).fetch(path2, repo)
+
+ wire_remote = wire.copy()
+ wire_remote['path'] = path2
+ repo_remote = self._factory.repo(wire_remote)
+ LocalGitClient(thin_packs=False).fetch(wire["path"], repo_remote)
+
+ revs = [
+ x.commit.id
+ for x in repo_remote.get_walker(include=[rev2], exclude=[rev1])]
+ return revs
+
+ @reraise_safe_exceptions
+ def get_object(self, wire, sha):
+ repo = self._factory.repo(wire)
+ obj = repo.get_object(sha)
+ commit_id = obj.id
+
+ if isinstance(obj, Tag):
+ commit_id = obj.object[1]
+
+ return {
+ 'id': obj.id,
+ 'type': obj.type_name,
+ 'commit_id': commit_id
+ }
+
+ @reraise_safe_exceptions
+ def get_object_attrs(self, wire, sha, *attrs):
+ repo = self._factory.repo(wire)
+ obj = repo.get_object(sha)
+ return list(getattr(obj, a) for a in attrs)
+
+ @reraise_safe_exceptions
+ def get_refs(self, wire, keys=None):
+ # FIXME(skreft): this method is affected by bug
+ # http://bugs.rhodecode.com/issues/298.
+ # Basically, it will overwrite previously computed references if
+ # there's another one with the same name and given the order of
+ # repo.get_refs() is not guaranteed, the output of this method is not
+ # stable either.
+ repo = self._factory.repo(wire)
+ refs = repo.get_refs()
+ if keys is None:
+ return refs
+
+ _refs = {}
+ for ref, sha in refs.iteritems():
+ for k, type_ in keys:
+ if ref.startswith(k):
+ _key = ref[len(k):]
+ if type_ == 'T':
+ sha = repo.get_object(sha).id
+ _refs[_key] = [sha, type_]
+ break
+ return _refs
+
+ @reraise_safe_exceptions
+ def get_refs_path(self, wire):
+ repo = self._factory.repo(wire)
+ return repo.refs.path
+
+ @reraise_safe_exceptions
+ def head(self, wire):
+ repo = self._factory.repo(wire)
+ return repo.head()
+
+ @reraise_safe_exceptions
+ def init(self, wire):
+ repo_path = str_to_dulwich(wire['path'])
+ self.repo = Repo.init(repo_path)
+
+ @reraise_safe_exceptions
+ def init_bare(self, wire):
+ repo_path = str_to_dulwich(wire['path'])
+ self.repo = Repo.init_bare(repo_path)
+
+ @reraise_safe_exceptions
+ def revision(self, wire, rev):
+ repo = self._factory.repo(wire)
+ obj = repo[rev]
+ obj_data = {
+ 'id': obj.id,
+ }
+ try:
+ obj_data['tree'] = obj.tree
+ except AttributeError:
+ pass
+ return obj_data
+
+ @reraise_safe_exceptions
+ def commit_attribute(self, wire, rev, attr):
+ repo = self._factory.repo(wire)
+ obj = repo[rev]
+ return getattr(obj, attr)
+
+ @reraise_safe_exceptions
+ def set_refs(self, wire, key, value):
+ repo = self._factory.repo(wire)
+ repo.refs[key] = value
+
+ @reraise_safe_exceptions
+ def remove_ref(self, wire, key):
+ repo = self._factory.repo(wire)
+ del repo.refs[key]
+
+ @reraise_safe_exceptions
+ def tree_changes(self, wire, source_id, target_id):
+ repo = self._factory.repo(wire)
+ source = repo[source_id].tree if source_id else None
+ target = repo[target_id].tree
+ result = repo.object_store.tree_changes(source, target)
+ return list(result)
+
+ @reraise_safe_exceptions
+ def tree_items(self, wire, tree_id):
+ repo = self._factory.repo(wire)
+ tree = repo[tree_id]
+
+ result = []
+ for item in tree.iteritems():
+ item_sha = item.sha
+ item_mode = item.mode
+
+ if FILE_MODE(item_mode) == GIT_LINK:
+ item_type = "link"
+ else:
+ item_type = repo[item_sha].type_name
+
+ result.append((item.path, item_mode, item_sha, item_type))
+ return result
+
+ @reraise_safe_exceptions
+ def update_server_info(self, wire):
+ repo = self._factory.repo(wire)
+ update_server_info(repo)
+
+ @reraise_safe_exceptions
+ def discover_git_version(self):
+ stdout, _ = self.run_git_command(
+ {}, ['--version'], _bare=True, _safe=True)
+ return stdout
+
+ @reraise_safe_exceptions
+ def run_git_command(self, wire, cmd, **opts):
+ path = wire.get('path', None)
+
+ if path and os.path.isdir(path):
+ opts['cwd'] = path
+
+ if '_bare' in opts:
+ _copts = []
+ del opts['_bare']
+ else:
+ _copts = ['-c', 'core.quotepath=false', ]
+ safe_call = False
+ if '_safe' in opts:
+ # no exc on failure
+ del opts['_safe']
+ safe_call = True
+
+ gitenv = os.environ.copy()
+ gitenv.update(opts.pop('extra_env', {}))
+ # need to clean GIT_DIR from the environment
+ if 'GIT_DIR' in gitenv:
+ del gitenv['GIT_DIR']
+ gitenv['GIT_CONFIG_NOGLOBAL'] = '1'
+
+ cmd = [settings.GIT_EXECUTABLE] + _copts + cmd
+
+ try:
+ _opts = {'env': gitenv, 'shell': False}
+ _opts.update(opts)
+ p = subprocessio.SubprocessIOChunker(cmd, **_opts)
+
+ return ''.join(p), ''.join(p.error)
+ except (EnvironmentError, OSError) as err:
+ tb_err = ("Couldn't run git command (%s).\n"
+ "Original error was:%s\n" % (cmd, err))
+ log.exception(tb_err)
+ if safe_call:
+ return '', err
+ else:
+ raise exceptions.VcsException(tb_err)
+
+
+def str_to_dulwich(value):
+ """
+ Dulwich 0.10.1a requires `unicode` objects to be passed in.
+ """
+ return value.decode(settings.WIRE_ENCODING)
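The `reraise_safe_exceptions` pattern used throughout this file — catch backend-specific errors at the boundary and re-raise neutral ones — reduces to the following sketch. The exception types here are stand-ins for the dulwich ones, not the real classes:

```python
import functools


class BackendLookupError(Exception):
    """Stand-in for a backend error such as dulwich's ObjectMissing."""


class NeutralLookupError(Exception):
    """Stand-in for the wire-safe exception handed to clients."""


def reraise_safe(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BackendLookupError as e:
            # Translate at the boundary so callers never see backend types.
            raise NeutralLookupError(str(e))
    return wrapper


@reraise_safe
def lookup(sha):
    raise BackendLookupError('missing object %s' % sha)


try:
    lookup('deadbeef')
except NeutralLookupError as e:
    assert 'deadbeef' in str(e)
```

The payoff is that RPC callers only ever need to handle the neutral types, regardless of which VCS backend raised the original error.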
diff --git a/vcsserver/hg.py b/vcsserver/hg.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/hg.py
@@ -0,0 +1,692 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import io
+import logging
+import stat
+import sys
+import urllib
+import urllib2
+
+from hgext import largefiles, rebase
+from hgext.strip import strip as hgext_strip
+from mercurial import commands
+from mercurial import unionrepo
+
+from vcsserver import exceptions
+from vcsserver.base import RepoFactory
+from vcsserver.hgcompat import (
+ archival, bin, clone, config as hgconfig, diffopts, hex, hg_url,
+ httpbasicauthhandler, httpdigestauthhandler, httppeer, localrepository,
+ match, memctx, exchange, memfilectx, nullrev, patch, peer, revrange, ui,
+ Abort, LookupError, RepoError, RepoLookupError, InterventionRequired,
+ RequirementError)
+
+log = logging.getLogger(__name__)
+
+
+def make_ui_from_config(repo_config):
+ baseui = ui.ui()
+
+ # clean the baseui object
+ baseui._ocfg = hgconfig.config()
+ baseui._ucfg = hgconfig.config()
+ baseui._tcfg = hgconfig.config()
+
+ for section, option, value in repo_config:
+ baseui.setconfig(section, option, value)
+
+ # make our hgweb quiet so it doesn't print output
+ baseui.setconfig('ui', 'quiet', 'true')
+
+ # force mercurial to only use 1 thread, otherwise it may try to set a
+ # signal in a non-main thread, thus generating a ValueError.
+ baseui.setconfig('worker', 'numcpus', 1)
+
+ return baseui
+
+
+def reraise_safe_exceptions(func):
+ """Decorator for converting mercurial exceptions to something neutral."""
+ def wrapper(*args, **kwargs):
+ try:
+ return func(*args, **kwargs)
+ except (Abort, InterventionRequired):
+ raise_from_original(exceptions.AbortException)
+ except RepoLookupError:
+ raise_from_original(exceptions.LookupException)
+ except RequirementError:
+ raise_from_original(exceptions.RequirementException)
+ except RepoError:
+ raise_from_original(exceptions.VcsException)
+ except LookupError:
+ raise_from_original(exceptions.LookupException)
+ except Exception as e:
+ if not hasattr(e, '_vcs_kind'):
+ log.exception("Unhandled exception in hg remote call")
+ raise_from_original(exceptions.UnhandledException)
+ raise
+ return wrapper
+
+
+def raise_from_original(new_type):
+ """
+ Raise a new exception type with original args and traceback.
+ """
+ _, original, traceback = sys.exc_info()
+ try:
+ raise new_type(*original.args), None, traceback
+ finally:
+ del traceback
+
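`raise_from_original` relies on the Python 2 three-expression `raise` statement to preserve the original traceback. Under Python 3 the same effect is achieved with `with_traceback`; this is a sketch of the equivalent, not part of the module:

```python
import sys


def raise_from_original_py3(new_type):
    # Python 3 counterpart: re-raise as new_type, keeping the traceback.
    _, original, tb = sys.exc_info()
    try:
        raise new_type(*original.args).with_traceback(tb)
    finally:
        del tb  # break the traceback reference cycle


class WireError(Exception):
    pass


try:
    try:
        raise ValueError('bad revision')
    except ValueError:
        raise_from_original_py3(WireError)
except WireError as e:
    assert e.args == ('bad revision',)
```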
+
+class MercurialFactory(RepoFactory):
+
+ def _create_config(self, config, hooks=True):
+ if not hooks:
+ hooks_to_clean = frozenset((
+ 'changegroup.repo_size', 'preoutgoing.pre_pull',
+ 'outgoing.pull_logger', 'prechangegroup.pre_push'))
+ new_config = []
+ for section, option, value in config:
+ if section == 'hooks' and option in hooks_to_clean:
+ continue
+ new_config.append((section, option, value))
+ config = new_config
+
+ baseui = make_ui_from_config(config)
+ return baseui
+
+ def _create_repo(self, wire, create):
+ baseui = self._create_config(wire["config"])
+ return localrepository(baseui, wire["path"], create)
+
+
+class HgRemote(object):
+
+ def __init__(self, factory):
+ self._factory = factory
+
+ self._bulk_methods = {
+ "affected_files": self.ctx_files,
+ "author": self.ctx_user,
+ "branch": self.ctx_branch,
+ "children": self.ctx_children,
+ "date": self.ctx_date,
+ "message": self.ctx_description,
+ "parents": self.ctx_parents,
+ "status": self.ctx_status,
+ "_file_paths": self.ctx_list,
+ }
+
+ @reraise_safe_exceptions
+ def archive_repo(self, archive_path, mtime, file_info, kind):
+ if kind == "tgz":
+ archiver = archival.tarit(archive_path, mtime, "gz")
+ elif kind == "tbz2":
+ archiver = archival.tarit(archive_path, mtime, "bz2")
+ elif kind == 'zip':
+ archiver = archival.zipit(archive_path, mtime)
+ else:
+ raise exceptions.ArchiveException(
+ 'Remote does not support: "%s".' % kind)
+
+ for f_path, f_mode, f_is_link, f_content in file_info:
+ archiver.addfile(f_path, f_mode, f_is_link, f_content)
+ archiver.done()
+
+ @reraise_safe_exceptions
+ def bookmarks(self, wire):
+ repo = self._factory.repo(wire)
+ return dict(repo._bookmarks)
+
+ @reraise_safe_exceptions
+ def branches(self, wire, normal, closed):
+ repo = self._factory.repo(wire)
+ iter_branches = repo.branchmap().iterbranches()
+ bt = {}
+ for branch_name, _heads, tip, is_closed in iter_branches:
+ if normal and not is_closed:
+ bt[branch_name] = tip
+ if closed and is_closed:
+ bt[branch_name] = tip
+
+ return bt
+
+ @reraise_safe_exceptions
+ def bulk_request(self, wire, rev, pre_load):
+ result = {}
+ for attr in pre_load:
+ try:
+ method = self._bulk_methods[attr]
+ result[attr] = method(wire, rev)
+ except KeyError:
+ raise exceptions.VcsException(
+ 'Unknown bulk attribute: "%s"' % attr)
+ return result
+
+ @reraise_safe_exceptions
+ def clone(self, wire, source, dest, update_after_clone=False, hooks=True):
+ baseui = self._factory._create_config(wire["config"], hooks=hooks)
+ clone(baseui, source, dest, noupdate=not update_after_clone)
+
+ @reraise_safe_exceptions
+ def commitctx(
+ self, wire, message, parents, commit_time, commit_timezone,
+ user, files, extra, removed, updated):
+
+ def _filectxfn(_repo, memctx, path):
+ """
+ Marks given path as added/changed/removed in a given _repo. This is
+ for internal mercurial commit function.
+ """
+
+ # check if this path is removed
+ if path in removed:
+ # returning None is a way to mark node for removal
+ return None
+
+ # check if this path is added
+ for node in updated:
+ if node['path'] == path:
+ return memfilectx(
+ _repo,
+ path=node['path'],
+ data=node['content'],
+ islink=False,
+ isexec=bool(node['mode'] & stat.S_IXUSR),
+ copied=False,
+ memctx=memctx)
+
+ raise exceptions.AbortException(
+ "Given path has not been marked as added, "
+ "changed or removed (%s)" % path)
+
+ repo = self._factory.repo(wire)
+
+ commit_ctx = memctx(
+ repo=repo,
+ parents=parents,
+ text=message,
+ files=files,
+ filectxfn=_filectxfn,
+ user=user,
+ date=(commit_time, commit_timezone),
+ extra=extra)
+
+ n = repo.commitctx(commit_ctx)
+ new_id = hex(n)
+
+ return new_id
+
+ @reraise_safe_exceptions
+ def ctx_branch(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return ctx.branch()
+
+ @reraise_safe_exceptions
+ def ctx_children(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return [child.rev() for child in ctx.children()]
+
+ @reraise_safe_exceptions
+ def ctx_date(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return ctx.date()
+
+ @reraise_safe_exceptions
+ def ctx_description(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return ctx.description()
+
+ @reraise_safe_exceptions
+ def ctx_diff(
+ self, wire, revision, git=True, ignore_whitespace=True, context=3):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ result = ctx.diff(
+ git=git, ignore_whitespace=ignore_whitespace, context=context)
+ return list(result)
+
+ @reraise_safe_exceptions
+ def ctx_files(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return ctx.files()
+
+ @reraise_safe_exceptions
+ def ctx_list(self, path, revision):
+ repo = self._factory.repo(path)
+ ctx = repo[revision]
+ return list(ctx)
+
+ @reraise_safe_exceptions
+ def ctx_parents(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return [parent.rev() for parent in ctx.parents()]
+
+ @reraise_safe_exceptions
+ def ctx_substate(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return ctx.substate
+
+ @reraise_safe_exceptions
+ def ctx_status(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ status = repo[ctx.p1().node()].status(other=ctx.node())
+ # the status object (an odd, custom named tuple in mercurial) is not
+ # correctly serializable via Pyro; we make it a list, as the underlying
+ # API expects a list anyway
+ return list(status)
+
+ @reraise_safe_exceptions
+ def ctx_user(self, wire, revision):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ return ctx.user()
+
+ @reraise_safe_exceptions
+ def check_url(self, url, config):
+ _proto = None
+ if '+' in url[:url.find('://')]:
+ _proto = url[0:url.find('+')]
+ url = url[url.find('+') + 1:]
+ handlers = []
+ url_obj = hg_url(url)
+ test_uri, authinfo = url_obj.authinfo()
+ url_obj.passwd = '*****'
+ cleaned_uri = str(url_obj)
+
+ if authinfo:
+ # create a password manager
+ passmgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
+ passmgr.add_password(*authinfo)
+
+ handlers.extend((httpbasicauthhandler(passmgr),
+ httpdigestauthhandler(passmgr)))
+
+ o = urllib2.build_opener(*handlers)
+ o.addheaders = [('Content-Type', 'application/mercurial-0.1'),
+ ('Accept', 'application/mercurial-0.1')]
+
+ q = {"cmd": 'between'}
+ q.update({'pairs': "%s-%s" % ('0' * 40, '0' * 40)})
+ qs = '?%s' % urllib.urlencode(q)
+ cu = "%s%s" % (test_uri, qs)
+ req = urllib2.Request(cu, None, {})
+
+ try:
+ resp = o.open(req)
+ if resp.code != 200:
+ raise exceptions.URLError('Return Code is not 200')
+ except Exception as e:
+ # means it cannot be cloned
+ raise exceptions.URLError("[%s] org_exc: %s" % (cleaned_uri, e))
+
+ # now check if it's a proper hg repo, but don't do it for svn
+ try:
+ if _proto == 'svn':
+ pass
+ else:
+ # check for pure hg repos
+ httppeer(make_ui_from_config(config), url).lookup('tip')
+ except Exception as e:
+ raise exceptions.URLError(
+ "url [%s] does not look like an hg repo org_exc: %s"
+ % (cleaned_uri, e))
+
+ return True
+
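`check_url` masks the password via `url_obj.passwd = '*****'` before embedding the URL in error messages. The same idea can be sketched with only the standard library (`mask_password` is a hypothetical helper, not part of the module above):

```python
from urllib.parse import urlsplit, urlunsplit


def mask_password(url):
    # Replace the password component with '*****' so the URL is safe
    # to log or embed in error messages, as check_url does above.
    parts = urlsplit(url)
    if parts.password is None:
        return url
    netloc = '%s:*****@%s' % (parts.username, parts.hostname)
    if parts.port:
        netloc += ':%d' % parts.port
    return urlunsplit(
        (parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```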
+ @reraise_safe_exceptions
+ def diff(
+ self, wire, rev1, rev2, file_filter, opt_git, opt_ignorews,
+ context):
+ repo = self._factory.repo(wire)
+
+ if file_filter:
+ match_filter = match(file_filter[0], '', [file_filter[1]])
+ else:
+ match_filter = file_filter
+ opts = diffopts(git=opt_git, ignorews=opt_ignorews, context=context)
+
+ try:
+ return "".join(patch.diff(
+ repo, node1=rev1, node2=rev2, match=match_filter, opts=opts))
+ except RepoLookupError:
+ raise exceptions.LookupException()
+
+ @reraise_safe_exceptions
+ def file_history(self, wire, revision, path, limit):
+ repo = self._factory.repo(wire)
+
+ ctx = repo[revision]
+ fctx = ctx.filectx(path)
+
+ def history_iter():
+ limit_rev = fctx.rev()
+ for obj in reversed(list(fctx.filelog())):
+ obj = fctx.filectx(obj)
+ if limit_rev >= obj.rev():
+ yield obj
+
+ history = []
+ for cnt, obj in enumerate(history_iter()):
+ if limit and cnt >= limit:
+ break
+ history.append(hex(obj.node()))
+
+ return history
+
+ @reraise_safe_exceptions
+ def file_history_untill(self, wire, revision, path, limit):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ fctx = ctx.filectx(path)
+
+ file_log = list(fctx.filelog())
+ if limit:
+ # Limit to the last n items
+ file_log = file_log[-limit:]
+
+ return [hex(fctx.filectx(cs).node()) for cs in reversed(file_log)]
+
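The `file_log[-limit:]` slice in `file_history_untill` keeps at most the last `limit` entries; a quick standalone illustration of that slicing rule:

```python
def last_n(items, limit):
    # Negative slicing returns the trailing `limit` items; a falsy limit
    # (None or 0) means "no limit", mirroring the guard above.
    return items[-limit:] if limit else items
```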
+ @reraise_safe_exceptions
+ def fctx_annotate(self, wire, revision, path):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ fctx = ctx.filectx(path)
+
+ result = []
+ for i, annotate_data in enumerate(fctx.annotate()):
+ ln_no = i + 1
+ sha = hex(annotate_data[0].node())
+ result.append((ln_no, sha, annotate_data[1]))
+ return result
+
+ @reraise_safe_exceptions
+ def fctx_data(self, wire, revision, path):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ fctx = ctx.filectx(path)
+ return fctx.data()
+
+ @reraise_safe_exceptions
+ def fctx_flags(self, wire, revision, path):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ fctx = ctx.filectx(path)
+ return fctx.flags()
+
+ @reraise_safe_exceptions
+ def fctx_size(self, wire, revision, path):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ fctx = ctx.filectx(path)
+ return fctx.size()
+
+ @reraise_safe_exceptions
+ def get_all_commit_ids(self, wire, name):
+ repo = self._factory.repo(wire)
+ revs = repo.filtered(name).changelog.index
+ return map(lambda x: hex(x[7]), revs)[:-1]
+
+ @reraise_safe_exceptions
+ def get_config_value(self, wire, section, name, untrusted=False):
+ repo = self._factory.repo(wire)
+ return repo.ui.config(section, name, untrusted=untrusted)
+
+ @reraise_safe_exceptions
+ def get_config_bool(self, wire, section, name, untrusted=False):
+ repo = self._factory.repo(wire)
+ return repo.ui.configbool(section, name, untrusted=untrusted)
+
+ @reraise_safe_exceptions
+ def get_config_list(self, wire, section, name, untrusted=False):
+ repo = self._factory.repo(wire)
+ return repo.ui.configlist(section, name, untrusted=untrusted)
+
+ @reraise_safe_exceptions
+ def is_large_file(self, wire, path):
+ return largefiles.lfutil.isstandin(path)
+
+ @reraise_safe_exceptions
+ def in_store(self, wire, sha):
+ repo = self._factory.repo(wire)
+ return largefiles.lfutil.instore(repo, sha)
+
+ @reraise_safe_exceptions
+ def in_user_cache(self, wire, sha):
+ repo = self._factory.repo(wire)
+ return largefiles.lfutil.inusercache(repo.ui, sha)
+
+ @reraise_safe_exceptions
+ def store_path(self, wire, sha):
+ repo = self._factory.repo(wire)
+ return largefiles.lfutil.storepath(repo, sha)
+
+ @reraise_safe_exceptions
+ def link(self, wire, sha, path):
+ repo = self._factory.repo(wire)
+ largefiles.lfutil.link(
+ largefiles.lfutil.usercachepath(repo.ui, sha), path)
+
+ @reraise_safe_exceptions
+ def localrepository(self, wire, create=False):
+ self._factory.repo(wire, create=create)
+
+ @reraise_safe_exceptions
+ def lookup(self, wire, revision, both):
+ # TODO Paris: Ugly hack to "deserialize" long for msgpack
+ if isinstance(revision, float):
+ revision = long(revision)
+ repo = self._factory.repo(wire)
+ try:
+ ctx = repo[revision]
+ except RepoLookupError:
+ raise exceptions.LookupException(revision)
+ except LookupError as e:
+ raise exceptions.LookupException(e.name)
+
+ if not both:
+ return ctx.hex()
+
+ ctx = repo[ctx.hex()]
+ return ctx.hex(), ctx.rev()
+
+ @reraise_safe_exceptions
+ def pull(self, wire, url, commit_ids=None):
+ repo = self._factory.repo(wire)
+ remote = peer(repo, {}, url)
+ if commit_ids:
+ commit_ids = [bin(commit_id) for commit_id in commit_ids]
+
+ return exchange.pull(
+ repo, remote, heads=commit_ids, force=None).cgresult
+
+ @reraise_safe_exceptions
+ def revision(self, wire, rev):
+ repo = self._factory.repo(wire)
+ ctx = repo[rev]
+ return ctx.rev()
+
+ @reraise_safe_exceptions
+ def rev_range(self, wire, filter):
+ repo = self._factory.repo(wire)
+ revisions = [rev for rev in revrange(repo, filter)]
+ return revisions
+
+ @reraise_safe_exceptions
+ def rev_range_hash(self, wire, node):
+ repo = self._factory.repo(wire)
+
+ def get_revs(repo, rev_opt):
+ if rev_opt:
+ revs = revrange(repo, rev_opt)
+ if len(revs) == 0:
+ return (nullrev, nullrev)
+ return max(revs), min(revs)
+ else:
+ return len(repo) - 1, 0
+
+ stop, start = get_revs(repo, [node + ':'])
+ revs = [hex(repo[r].node()) for r in xrange(start, stop + 1)]
+ return revs
+
+ @reraise_safe_exceptions
+ def revs_from_revspec(self, wire, rev_spec, *args, **kwargs):
+ other_path = kwargs.pop('other_path', None)
+
+ # case when we want to compare two independent repositories
+ if other_path and other_path != wire["path"]:
+ baseui = self._factory._create_config(wire["config"])
+ repo = unionrepo.unionrepository(baseui, other_path, wire["path"])
+ else:
+ repo = self._factory.repo(wire)
+ return list(repo.revs(rev_spec, *args))
+
+ @reraise_safe_exceptions
+ def strip(self, wire, revision, update, backup):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ hgext_strip(
+ repo.baseui, repo, ctx.node(), update=update, backup=backup)
+
+ @reraise_safe_exceptions
+ def tag(self, wire, name, revision, message, local, user,
+ tag_time, tag_timezone):
+ repo = self._factory.repo(wire)
+ ctx = repo[revision]
+ node = ctx.node()
+
+ date = (tag_time, tag_timezone)
+ try:
+ repo.tag(name, node, message, local, user, date)
+ except Abort:
+ log.exception("Tag operation aborted")
+ raise exceptions.AbortException()
+
+ @reraise_safe_exceptions
+ def tags(self, wire):
+ repo = self._factory.repo(wire)
+ return repo.tags()
+
+ @reraise_safe_exceptions
+ def update(self, wire, node=None, clean=False):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ commands.update(baseui, repo, node=node, clean=clean)
+
+ @reraise_safe_exceptions
+ def identify(self, wire):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ output = io.BytesIO()
+ baseui.write = output.write
+ # This is required to get a full node id
+ baseui.debugflag = True
+ commands.identify(baseui, repo, id=True)
+
+ return output.getvalue()
+
+ @reraise_safe_exceptions
+ def pull_cmd(self, wire, source, bookmark=None, branch=None, revision=None,
+ hooks=True):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'], hooks=hooks)
+
+ # Mercurial internally has a lot of logic that checks ONLY whether
+ # an option is defined; we pass only the options that were supplied
+ opts = {}
+ if bookmark:
+ opts['bookmark'] = bookmark
+ if branch:
+ opts['branch'] = branch
+ if revision:
+ opts['rev'] = revision
+
+ commands.pull(baseui, repo, source, **opts)
+
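The option filtering in `pull_cmd` can be isolated: Mercurial treats the mere presence of a key in `opts` as "option given", so keys are added only for supplied values. A minimal standalone sketch of that rule (not the remote method itself):

```python
def build_pull_opts(bookmark=None, branch=None, revision=None):
    # Only include keys whose values were actually provided, since
    # Mercurial checks key presence rather than truthiness downstream.
    opts = {}
    if bookmark:
        opts['bookmark'] = bookmark
    if branch:
        opts['branch'] = branch
    if revision:
        opts['rev'] = revision
    return opts
```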
+ @reraise_safe_exceptions
+ def heads(self, wire, branch=None):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ output = io.BytesIO()
+
+ def write(data, **unused_kwargs):
+ output.write(data)
+
+ baseui.write = write
+ if branch:
+ args = [branch]
+ else:
+ args = []
+ commands.heads(baseui, repo, template='{node} ', *args)
+
+ return output.getvalue()
+
+ @reraise_safe_exceptions
+ def ancestor(self, wire, revision1, revision2):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ output = io.BytesIO()
+ baseui.write = output.write
+ commands.debugancestor(baseui, repo, revision1, revision2)
+
+ return output.getvalue()
+
+ @reraise_safe_exceptions
+ def push(self, wire, revisions, dest_path, hooks=True,
+ push_branches=False):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'], hooks=hooks)
+ commands.push(baseui, repo, dest=dest_path, rev=revisions,
+ new_branch=push_branches)
+
+ @reraise_safe_exceptions
+ def merge(self, wire, revision):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ repo.ui.setconfig('ui', 'merge', 'internal:dump')
+ commands.merge(baseui, repo, rev=revision)
+
+ @reraise_safe_exceptions
+ def commit(self, wire, message, username):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ repo.ui.setconfig('ui', 'username', username)
+ commands.commit(baseui, repo, message=message)
+
+ @reraise_safe_exceptions
+ def rebase(self, wire, source=None, dest=None, abort=False):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ repo.ui.setconfig('ui', 'merge', 'internal:dump')
+ rebase.rebase(
+ baseui, repo, base=source, dest=dest, abort=abort, keep=not abort)
+
+ @reraise_safe_exceptions
+ def bookmark(self, wire, bookmark, revision=None):
+ repo = self._factory.repo(wire)
+ baseui = self._factory._create_config(wire['config'])
+ commands.bookmark(baseui, repo, bookmark, rev=revision, force=True)
diff --git a/vcsserver/hgcompat.py b/vcsserver/hgcompat.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/hgcompat.py
@@ -0,0 +1,61 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+"""
+Mercurial libs compatibility
+"""
+
+import mercurial
+import mercurial.demandimport
+# patch demandimport: make enable() a no-op, due to a bug in mercurial
+# which otherwise always triggers demandimport.enable()
+mercurial.demandimport.enable = lambda *args, **kwargs: 1
+
+from mercurial import ui
+from mercurial import patch
+from mercurial import config
+from mercurial import extensions
+from mercurial import scmutil
+from mercurial import archival
+from mercurial import discovery
+from mercurial import unionrepo
+from mercurial import localrepo
+from mercurial import merge as hg_merge
+
+from mercurial.commands import clone, nullid, pull
+from mercurial.context import memctx, memfilectx
+from mercurial.error import (
+ LookupError, RepoError, RepoLookupError, Abort, InterventionRequired,
+ RequirementError)
+from mercurial.hgweb import hgweb_mod
+from mercurial.localrepo import localrepository
+from mercurial.match import match
+from mercurial.mdiff import diffopts
+from mercurial.node import bin, hex
+from mercurial.encoding import tolocal
+from mercurial.discovery import findcommonoutgoing
+from mercurial.hg import peer
+from mercurial.httppeer import httppeer
+from mercurial.util import url as hg_url
+from mercurial.scmutil import revrange
+from mercurial.node import nullrev
+from mercurial import exchange
+from hgext import largefiles
+
+# these auth handlers are patched to work around a python 2.6.5 bug
+# causing infinite looping when given invalid resources
+from mercurial.url import httpbasicauthhandler, httpdigestauthhandler
diff --git a/vcsserver/hgpatches.py b/vcsserver/hgpatches.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/hgpatches.py
@@ -0,0 +1,60 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+"""
+Adjustments to Mercurial
+
+Intentionally kept separate from `hgcompat` and `hg`, so that these patches can
+be applied without having to import the whole Mercurial machinery.
+
+Imports are function local, so that just importing this module does not cause
+side-effects other than these functions being defined.
+"""
+
+import logging
+
+
+def patch_largefiles_capabilities():
+ """
+ Patches the capabilities function in the largefiles extension.
+ """
+ from vcsserver import hgcompat
+ lfproto = hgcompat.largefiles.proto
+ wrapper = _dynamic_capabilities_wrapper(
+ lfproto, hgcompat.extensions.extensions)
+ lfproto.capabilities = wrapper
+
+
+def _dynamic_capabilities_wrapper(lfproto, extensions):
+
+ wrapped_capabilities = lfproto.capabilities
+ logger = logging.getLogger('vcsserver.hg')
+
+ def _dynamic_capabilities(repo, proto):
+ """
+ Adds dynamic behavior, so that the capability is only added if the
+ extension is enabled in the current ui object.
+ """
+ if 'largefiles' in dict(extensions(repo.ui)):
+ logger.debug('Extension largefiles enabled')
+ calc_capabilities = wrapped_capabilities
+ else:
+ logger.debug('Extension largefiles disabled')
+ calc_capabilities = lfproto.capabilitiesorig
+ return calc_capabilities(repo, proto)
+
+ return _dynamic_capabilities
diff --git a/vcsserver/hooks.py b/vcsserver/hooks.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/hooks.py
@@ -0,0 +1,372 @@
+# -*- coding: utf-8 -*-
+
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import collections
+import importlib
+import io
+import subprocess
+import sys
+from httplib import HTTPConnection
+
+
+import mercurial.scmutil
+import mercurial.node
+import Pyro4
+import simplejson as json
+
+from vcsserver import exceptions
+
+
+class HooksHttpClient(object):
+ connection = None
+
+ def __init__(self, hooks_uri):
+ self.hooks_uri = hooks_uri
+
+ def __call__(self, method, extras):
+ connection = HTTPConnection(self.hooks_uri)
+ body = self._serialize(method, extras)
+ connection.request('POST', '/', body)
+ response = connection.getresponse()
+ return json.loads(response.read())
+
+ def _serialize(self, hook_name, extras):
+ data = {
+ 'method': hook_name,
+ 'extras': extras
+ }
+ return json.dumps(data)
+
+
+class HooksDummyClient(object):
+ def __init__(self, hooks_module):
+ self._hooks_module = importlib.import_module(hooks_module)
+
+ def __call__(self, hook_name, extras):
+ with self._hooks_module.Hooks() as hooks:
+ return getattr(hooks, hook_name)(extras)
+
+
+class HooksPyro4Client(object):
+ def __init__(self, hooks_uri):
+ self.hooks_uri = hooks_uri
+
+ def __call__(self, hook_name, extras):
+ with Pyro4.Proxy(self.hooks_uri) as hooks:
+ return getattr(hooks, hook_name)(extras)
+
+
+class RemoteMessageWriter(object):
+ """Writer base class."""
+ def write(self, message):
+ raise NotImplementedError()
+
+
+class HgMessageWriter(RemoteMessageWriter):
+ """Writer that knows how to send messages to mercurial clients."""
+
+ def __init__(self, ui):
+ self.ui = ui
+
+ def write(self, message):
+ # TODO: Check why the quiet flag is set by default.
+ old = self.ui.quiet
+ self.ui.quiet = False
+ self.ui.status(message.encode('utf-8'))
+ self.ui.quiet = old
+
+
+class GitMessageWriter(RemoteMessageWriter):
+ """Writer that knows how to send messages to git clients."""
+
+ def __init__(self, stdout=None):
+ self.stdout = stdout or sys.stdout
+
+ def write(self, message):
+ self.stdout.write(message.encode('utf-8'))
+
+
+def _handle_exception(result):
+ exception_class = result.get('exception')
+ if exception_class == 'HTTPLockedRC':
+ raise exceptions.RepositoryLockedException(*result['exception_args'])
+ elif exception_class == 'RepositoryError':
+ raise exceptions.VcsException(*result['exception_args'])
+ elif exception_class:
+ raise Exception('Got remote exception "%s" with args "%s"' %
+ (exception_class, result['exception_args']))
+
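`_handle_exception` translates exception class names shipped in the result dict back into local exception types. The dispatch can be sketched standalone (the exception classes below are stand-ins, not the real `vcsserver.exceptions` classes):

```python
class RepositoryLockedException(Exception):
    """Stand-in for exceptions.RepositoryLockedException."""


class VcsException(Exception):
    """Stand-in for exceptions.VcsException."""


def handle_exception(result):
    # Map the remote exception name to a local type; anything unknown
    # is re-raised as a generic Exception, as in _handle_exception above.
    exception_class = result.get('exception')
    if exception_class == 'HTTPLockedRC':
        raise RepositoryLockedException(*result['exception_args'])
    elif exception_class == 'RepositoryError':
        raise VcsException(*result['exception_args'])
    elif exception_class:
        raise Exception('Got remote exception "%s" with args "%s"' %
                        (exception_class, result['exception_args']))
```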
+
+def _get_hooks_client(extras):
+ if 'hooks_uri' in extras:
+ protocol = extras.get('hooks_protocol')
+ return (
+ HooksHttpClient(extras['hooks_uri'])
+ if protocol == 'http'
+ else HooksPyro4Client(extras['hooks_uri'])
+ )
+ else:
+ return HooksDummyClient(extras['hooks_module'])
+
+
+def _call_hook(hook_name, extras, writer):
+ hooks = _get_hooks_client(extras)
+ result = hooks(hook_name, extras)
+ writer.write(result['output'])
+ _handle_exception(result)
+
+ return result['status']
+
+
+def _extras_from_ui(ui):
+ extras = json.loads(ui.config('rhodecode', 'RC_SCM_DATA'))
+ return extras
+
+
+def repo_size(ui, repo, **kwargs):
+ return _call_hook('repo_size', _extras_from_ui(ui), HgMessageWriter(ui))
+
+
+def pre_pull(ui, repo, **kwargs):
+ return _call_hook('pre_pull', _extras_from_ui(ui), HgMessageWriter(ui))
+
+
+def post_pull(ui, repo, **kwargs):
+ return _call_hook('post_pull', _extras_from_ui(ui), HgMessageWriter(ui))
+
+
+def pre_push(ui, repo, **kwargs):
+ return _call_hook('pre_push', _extras_from_ui(ui), HgMessageWriter(ui))
+
+
+# N.B.(skreft): the two functions below were taken and adapted from
+# rhodecode.lib.vcs.remote.handle_git_pre_receive
+# They are required to compute the commit_ids
+def _get_revs(repo, rev_opt):
+ revs = [rev for rev in mercurial.scmutil.revrange(repo, rev_opt)]
+ if len(revs) == 0:
+ return (mercurial.node.nullrev, mercurial.node.nullrev)
+
+ return max(revs), min(revs)
+
+
+def _rev_range_hash(repo, node):
+ stop, start = _get_revs(repo, [node + ':'])
+ revs = [mercurial.node.hex(repo[r].node()) for r in xrange(start, stop + 1)]
+
+ return revs
+
+
+def post_push(ui, repo, node, **kwargs):
+ commit_ids = _rev_range_hash(repo, node)
+
+ extras = _extras_from_ui(ui)
+ extras['commit_ids'] = commit_ids
+
+ return _call_hook('post_push', extras, HgMessageWriter(ui))
+
+
+# backward compat
+log_pull_action = post_pull
+
+# backward compat
+log_push_action = post_push
+
+
+def handle_git_pre_receive(unused_repo_path, unused_revs, unused_env):
+ """
+ Old hook name: keep here for backward compatibility.
+
+ This is only required when the installed git hooks are not upgraded.
+ """
+ pass
+
+
+def handle_git_post_receive(unused_repo_path, unused_revs, unused_env):
+ """
+ Old hook name: keep here for backward compatibility.
+
+ This is only required when the installed git hooks are not upgraded.
+ """
+ pass
+
+
+HookResponse = collections.namedtuple('HookResponse', ('status', 'output'))
+
+
+def git_pre_pull(extras):
+ """
+ Pre pull hook.
+
+ :param extras: dictionary containing the keys defined in simplevcs
+ :type extras: dict
+
+ :return: status code of the hook. 0 for success.
+ :rtype: int
+ """
+ if 'pull' not in extras['hooks']:
+ return HookResponse(0, '')
+
+ stdout = io.BytesIO()
+ try:
+ status = _call_hook('pre_pull', extras, GitMessageWriter(stdout))
+ except Exception as error:
+ status = 128
+ stdout.write('ERROR: %s\n' % str(error))
+
+ return HookResponse(status, stdout.getvalue())
+
+
+def git_post_pull(extras):
+ """
+ Post pull hook.
+
+ :param extras: dictionary containing the keys defined in simplevcs
+ :type extras: dict
+
+ :return: status code of the hook. 0 for success.
+ :rtype: int
+ """
+ if 'pull' not in extras['hooks']:
+ return HookResponse(0, '')
+
+ stdout = io.BytesIO()
+ try:
+ status = _call_hook('post_pull', extras, GitMessageWriter(stdout))
+ except Exception as error:
+ status = 128
+ stdout.write('ERROR: %s\n' % error)
+
+ return HookResponse(status, stdout.getvalue())
+
+
+def git_pre_receive(unused_repo_path, unused_revs, env):
+ """
+ Pre push hook.
+
+ :param env: environment mapping; must contain the key 'RC_SCM_DATA'
+ with the JSON-encoded extras defined in simplevcs
+ :type env: dict
+
+ :return: status code of the hook. 0 for success.
+ :rtype: int
+ """
+ extras = json.loads(env['RC_SCM_DATA'])
+ if 'push' not in extras['hooks']:
+ return 0
+ return _call_hook('pre_push', extras, GitMessageWriter())
+
+
+def _run_command(arguments):
+ """
+ Run the specified command and return the stdout.
+
+ :param arguments: sequence of program arguments (including the program name)
+ :type arguments: list[str]
+ """
+ # TODO(skreft): refactor this method and all the other similar ones.
+ # Probably this should be using subprocessio.
+ process = subprocess.Popen(
+ arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ stdout, _ = process.communicate()
+
+ if process.returncode != 0:
+ raise Exception(
+ 'Command %s exited with exit code %s' % (arguments,
+ process.returncode))
+
+ return stdout
+
+
+def git_post_receive(unused_repo_path, revision_lines, env):
+ """
+ Post push hook.
+
+ :param revision_lines: lines of the form "<old_rev> <new_rev> <ref>"
+ :param env: environment mapping; must contain the key 'RC_SCM_DATA'
+ with the JSON-encoded extras defined in simplevcs
+
+ :return: status code of the hook. 0 for success.
+ :rtype: int
+ """
+ extras = json.loads(env['RC_SCM_DATA'])
+ if 'push' not in extras['hooks']:
+ return 0
+
+ rev_data = []
+ for revision_line in revision_lines:
+ old_rev, new_rev, ref = revision_line.strip().split(' ')
+ ref_data = ref.split('/', 2)
+ if ref_data[1] in ('tags', 'heads'):
+ rev_data.append({
+ 'old_rev': old_rev,
+ 'new_rev': new_rev,
+ 'ref': ref,
+ 'type': ref_data[1],
+ 'name': ref_data[2],
+ })
+
+ git_revs = []
+
+ # N.B.(skreft): it is ok to just call git, since before running a
+ # subcommand git sets the PATH environment variable so that it points
+ # to the correct version of the git executable.
+ empty_commit_id = '0' * 40
+ for push_ref in rev_data:
+ type_ = push_ref['type']
+ if type_ == 'heads':
+ if push_ref['old_rev'] == empty_commit_id:
+
+ # Fix up head revision if needed
+ cmd = ['git', 'show', 'HEAD']
+ try:
+ _run_command(cmd)
+ except Exception:
+ cmd = ['git', 'symbolic-ref', 'HEAD',
+ 'refs/heads/%s' % push_ref['name']]
+ print "Setting default branch to %s" % push_ref['name']
+ _run_command(cmd)
+
+ cmd = ['git', 'for-each-ref', '--format=%(refname)',
+ 'refs/heads/*']
+ heads = _run_command(cmd)
+ heads = heads.replace(push_ref['ref'], '')
+ heads = ' '.join(head for head in heads.splitlines() if head)
+ cmd = ['git', 'log', '--reverse', '--pretty=format:%H',
+ '--', push_ref['new_rev'], '--not', heads]
+ git_revs.extend(_run_command(cmd).splitlines())
+ elif push_ref['new_rev'] == empty_commit_id:
+ # delete branch case
+ git_revs.append('delete_branch=>%s' % push_ref['name'])
+ else:
+ cmd = ['git', 'log',
+ '{old_rev}..{new_rev}'.format(**push_ref),
+ '--reverse', '--pretty=format:%H']
+ git_revs.extend(_run_command(cmd).splitlines())
+ elif type_ == 'tags':
+ git_revs.append('tag=>%s' % push_ref['name'])
+
+ extras['commit_ids'] = git_revs
+
+ if 'repo_size' in extras['hooks']:
+ try:
+ _call_hook('repo_size', extras, GitMessageWriter())
+ except Exception:
+ pass
+
+ return _call_hook('post_push', extras, GitMessageWriter())
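`git_post_receive` receives stdin lines of the form `<old_rev> <new_rev> <ref>` and keeps only branch and tag refs. The parsing loop can be exercised on its own (a standalone sketch of the loop above):

```python
def parse_revision_lines(revision_lines):
    # Mirror of the parsing loop in git_post_receive: each line is
    # "<old_rev> <new_rev> <ref>", and only refs under refs/tags/ or
    # refs/heads/ are kept.
    rev_data = []
    for line in revision_lines:
        old_rev, new_rev, ref = line.strip().split(' ')
        ref_parts = ref.split('/', 2)
        if ref_parts[1] in ('tags', 'heads'):
            rev_data.append({
                'old_rev': old_rev,
                'new_rev': new_rev,
                'ref': ref,
                'type': ref_parts[1],
                'name': ref_parts[2],
            })
    return rev_data
```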
diff --git a/vcsserver/http_main.py b/vcsserver/http_main.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/http_main.py
@@ -0,0 +1,335 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import base64
+import locale
+import logging
+import uuid
+import wsgiref.util
+from itertools import chain
+
+import msgpack
+from beaker.cache import CacheManager
+from beaker.util import parse_cache_config_options
+from pyramid.config import Configurator
+from pyramid.wsgi import wsgiapp
+
+from vcsserver import remote_wsgi, scm_app, settings
+from vcsserver.echo_stub import remote_wsgi as remote_wsgi_stub
+from vcsserver.echo_stub.echo_app import EchoApp
+from vcsserver.server import VcsServer
+
+try:
+ from vcsserver.git import GitFactory, GitRemote
+except ImportError:
+ GitFactory = None
+ GitRemote = None
+try:
+ from vcsserver.hg import MercurialFactory, HgRemote
+except ImportError:
+ MercurialFactory = None
+ HgRemote = None
+try:
+ from vcsserver.svn import SubversionFactory, SvnRemote
+except ImportError:
+ SubversionFactory = None
+ SvnRemote = None
+
+log = logging.getLogger(__name__)
+
+
+class VCS(object):
+ def __init__(self, locale=None, cache_config=None):
+ self.locale = locale
+ self.cache_config = cache_config
+ self._configure_locale()
+ self._initialize_cache()
+
+ if GitFactory and GitRemote:
+ git_repo_cache = self.cache.get_cache_region(
+ 'git', region='repo_object')
+ git_factory = GitFactory(git_repo_cache)
+ self._git_remote = GitRemote(git_factory)
+ else:
+ log.info("Git client import failed")
+
+ if MercurialFactory and HgRemote:
+ hg_repo_cache = self.cache.get_cache_region(
+ 'hg', region='repo_object')
+ hg_factory = MercurialFactory(hg_repo_cache)
+ self._hg_remote = HgRemote(hg_factory)
+ else:
+ log.info("Mercurial client import failed")
+
+ if SubversionFactory and SvnRemote:
+ svn_repo_cache = self.cache.get_cache_region(
+ 'svn', region='repo_object')
+ svn_factory = SubversionFactory(svn_repo_cache)
+ self._svn_remote = SvnRemote(svn_factory, hg_factory=hg_factory)
+ else:
+ log.info("Subversion client import failed")
+
+ self._vcsserver = VcsServer()
+
+ def _initialize_cache(self):
+ cache_config = parse_cache_config_options(self.cache_config)
+ log.info('Initializing beaker cache: %s' % cache_config)
+ self.cache = CacheManager(**cache_config)
+
+ def _configure_locale(self):
+ if self.locale:
+ log.info('Setting locale `LC_ALL` to %s', self.locale)
+ else:
+ log.info(
+ 'Configuring locale subsystem based on environment variables')
+ try:
+ # If self.locale is the empty string, then the locale
+ # module will use the environment variables. See the
+ # documentation of the package `locale`.
+ locale.setlocale(locale.LC_ALL, self.locale)
+
+ language_code, encoding = locale.getlocale()
+ log.info(
+ 'Locale set to language code "%s" with encoding "%s".',
+ language_code, encoding)
+ except locale.Error:
+ log.exception(
+ 'Cannot set locale, not configuring the locale system')
+
+
+class WsgiProxy(object):
+ def __init__(self, wsgi):
+ self.wsgi = wsgi
+
+ def __call__(self, environ, start_response):
+ input_data = environ['wsgi.input'].read()
+ input_data = msgpack.unpackb(input_data)
+
+ error = None
+ try:
+ data, status, headers = self.wsgi.handle(
+ input_data['environment'], input_data['input_data'],
+ *input_data['args'], **input_data['kwargs'])
+ except Exception as e:
+ data, status, headers = [], None, None
+ error = {
+ 'message': str(e),
+ '_vcs_kind': getattr(e, '_vcs_kind', None)
+ }
+
+ start_response('200 OK', [])
+ return self._iterator(error, status, headers, data)
+
+ def _iterator(self, error, status, headers, data):
+ initial_data = [
+ error,
+ status,
+ headers,
+ ]
+
+ for d in chain(initial_data, data):
+ yield msgpack.packb(d)
+
+
+class HTTPApplication(object):
+ ALLOWED_EXCEPTIONS = ('KeyError', 'URLError')
+
+ remote_wsgi = remote_wsgi
+ _use_echo_app = False
+
+ def __init__(self, settings=None):
+ self.config = Configurator(settings=settings)
+ locale = settings.get('locale', 'en_US.UTF-8')
+ vcs = VCS(locale=locale, cache_config=settings)
+ self._remotes = {
+ 'hg': vcs._hg_remote,
+ 'git': vcs._git_remote,
+ 'svn': vcs._svn_remote,
+ 'server': vcs._vcsserver,
+ }
+ if settings.get('dev.use_echo_app', 'false').lower() == 'true':
+ self._use_echo_app = True
+ log.warning("Using EchoApp for VCS operations.")
+ self.remote_wsgi = remote_wsgi_stub
+ self._configure_settings(settings)
+ self._configure()
+
+ def _configure_settings(self, app_settings):
+ """
+ Configure the settings module.
+ """
+ git_path = app_settings.get('git_path', None)
+ if git_path:
+ settings.GIT_EXECUTABLE = git_path
+
+ def _configure(self):
+ self.config.add_renderer(
+ name='msgpack',
+ factory=self._msgpack_renderer_factory)
+
+ self.config.add_route('status', '/status')
+ self.config.add_route('hg_proxy', '/proxy/hg')
+ self.config.add_route('git_proxy', '/proxy/git')
+ self.config.add_route('vcs', '/{backend}')
+ self.config.add_route('stream_git', '/stream/git/*repo_name')
+ self.config.add_route('stream_hg', '/stream/hg/*repo_name')
+
+ self.config.add_view(
+ self.status_view, route_name='status', renderer='json')
+ self.config.add_view(self.hg_proxy(), route_name='hg_proxy')
+ self.config.add_view(self.git_proxy(), route_name='git_proxy')
+ self.config.add_view(
+ self.vcs_view, route_name='vcs', renderer='msgpack')
+
+ self.config.add_view(self.hg_stream(), route_name='stream_hg')
+ self.config.add_view(self.git_stream(), route_name='stream_git')
+
+ def wsgi_app(self):
+ return self.config.make_wsgi_app()
+
+ def vcs_view(self, request):
+ remote = self._remotes[request.matchdict['backend']]
+ payload = msgpack.unpackb(request.body, use_list=True)
+ method = payload.get('method')
+ params = payload.get('params')
+ wire = params.get('wire')
+ args = params.get('args')
+ kwargs = params.get('kwargs')
+ if wire:
+ try:
+ wire['context'] = uuid.UUID(wire['context'])
+ except KeyError:
+ pass
+ args.insert(0, wire)
+
+ try:
+ resp = getattr(remote, method)(*args, **kwargs)
+ except Exception as e:
+ type_ = e.__class__.__name__
+ if type_ not in self.ALLOWED_EXCEPTIONS:
+ type_ = None
+
+ resp = {
+ 'id': payload.get('id'),
+ 'error': {
+ 'message': e.message,
+ 'type': type_
+ }
+ }
+ try:
+ resp['error']['_vcs_kind'] = e._vcs_kind
+ except AttributeError:
+ pass
+ else:
+ resp = {
+ 'id': payload.get('id'),
+ 'result': resp
+ }
+
+ return resp
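The request shape handled by `vcs_view` can be sketched with plain dicts, leaving the msgpack framing aside. The method name and paths below are made-up illustrations, not values from the module:

```python
import uuid

# Hypothetical payload, shaped the way vcs_view sees it after msgpack
# decoding: a JSON-RPC-like map with 'id', 'method' and 'params'.
payload = {
    'id': 'req-1',
    'method': 'pull',  # assumed method name, for illustration only
    'params': {
        'wire': {'context': '12345678-1234-5678-1234-567812345678',
                 'path': '/srv/repos/demo'},
        'args': [],
        'kwargs': {},
    },
}

params = payload['params']
wire = params['wire']
# vcs_view turns the serialized context back into a UUID before dispatch
wire['context'] = uuid.UUID(wire['context'])
args = list(params['args'])
args.insert(0, wire)  # the wire dict travels as the first positional argument
```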
+
+ def status_view(self, request):
+ return {'status': 'OK'}
+
+ def _msgpack_renderer_factory(self, info):
+ def _render(value, system):
+ value = msgpack.packb(value)
+ request = system.get('request')
+ if request is not None:
+ response = request.response
+ ct = response.content_type
+ if ct == response.default_content_type:
+ response.content_type = 'application/x-msgpack'
+ return value
+ return _render
+
+ def hg_proxy(self):
+ @wsgiapp
+ def _hg_proxy(environ, start_response):
+ app = WsgiProxy(self.remote_wsgi.HgRemoteWsgi())
+ return app(environ, start_response)
+ return _hg_proxy
+
+ def git_proxy(self):
+ @wsgiapp
+ def _git_proxy(environ, start_response):
+ app = WsgiProxy(self.remote_wsgi.GitRemoteWsgi())
+ return app(environ, start_response)
+ return _git_proxy
+
+ def hg_stream(self):
+ if self._use_echo_app:
+ @wsgiapp
+ def _hg_stream(environ, start_response):
+ app = EchoApp('fake_path', 'fake_name', None)
+ return app(environ, start_response)
+ return _hg_stream
+ else:
+ @wsgiapp
+ def _hg_stream(environ, start_response):
+ repo_path = environ['HTTP_X_RC_REPO_PATH']
+ repo_name = environ['HTTP_X_RC_REPO_NAME']
+ packed_config = base64.b64decode(
+ environ['HTTP_X_RC_REPO_CONFIG'])
+ config = msgpack.unpackb(packed_config)
+ app = scm_app.create_hg_wsgi_app(
+ repo_path, repo_name, config)
+
+ # Consistent path information for hgweb
+ environ['PATH_INFO'] = environ['HTTP_X_RC_PATH_INFO']
+ environ['REPO_NAME'] = repo_name
+ return app(environ, ResponseFilter(start_response))
+ return _hg_stream
+
+ def git_stream(self):
+ if self._use_echo_app:
+ @wsgiapp
+ def _git_stream(environ, start_response):
+ app = EchoApp('fake_path', 'fake_name', None)
+ return app(environ, start_response)
+ return _git_stream
+ else:
+ @wsgiapp
+ def _git_stream(environ, start_response):
+ repo_path = environ['HTTP_X_RC_REPO_PATH']
+ repo_name = environ['HTTP_X_RC_REPO_NAME']
+ packed_config = base64.b64decode(
+ environ['HTTP_X_RC_REPO_CONFIG'])
+ config = msgpack.unpackb(packed_config)
+
+ environ['PATH_INFO'] = environ['HTTP_X_RC_PATH_INFO']
+ app = scm_app.create_git_wsgi_app(
+ repo_path, repo_name, config)
+ return app(environ, start_response)
+ return _git_stream
+
+
+class ResponseFilter(object):
+
+ def __init__(self, start_response):
+ self._start_response = start_response
+
+ def __call__(self, status, response_headers, exc_info=None):
+ headers = tuple(
+ (h, v) for h, v in response_headers
+ if not wsgiref.util.is_hop_by_hop(h))
+ return self._start_response(status, headers, exc_info)
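What `ResponseFilter` drops can be seen directly with the stdlib helper it uses: hop-by-hop headers (RFC 2616, section 13.5.1) must not be forwarded by an intermediary. A minimal sketch with made-up header values:

```python
import wsgiref.util

# Headers a backend might emit; the hop-by-hop ones are filtered out
# exactly as ResponseFilter does before calling start_response.
response_headers = [
    ('Content-Type', 'application/x-git-upload-pack-result'),
    ('Connection', 'keep-alive'),        # hop-by-hop: dropped
    ('Transfer-Encoding', 'chunked'),    # hop-by-hop: dropped
]
kept = tuple(
    (h, v) for h, v in response_headers
    if not wsgiref.util.is_hop_by_hop(h))
```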
+
+
+def main(global_config, **settings):
+ app = HTTPApplication(settings=settings)
+ return app.wsgi_app()
diff --git a/vcsserver/main.py b/vcsserver/main.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/main.py
@@ -0,0 +1,507 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import atexit
+import locale
+import logging
+import optparse
+import os
+import textwrap
+import threading
+import sys
+
+import configobj
+import Pyro4
+from beaker.cache import CacheManager
+from beaker.util import parse_cache_config_options
+
+try:
+ from vcsserver.git import GitFactory, GitRemote
+except ImportError:
+ GitFactory = None
+ GitRemote = None
+try:
+ from vcsserver.hg import MercurialFactory, HgRemote
+except ImportError:
+ MercurialFactory = None
+ HgRemote = None
+try:
+ from vcsserver.svn import SubversionFactory, SvnRemote
+except ImportError:
+ SubversionFactory = None
+ SvnRemote = None
+
+from server import VcsServer
+from vcsserver import hgpatches, remote_wsgi, settings
+from vcsserver.echo_stub import remote_wsgi as remote_wsgi_stub
+
+log = logging.getLogger(__name__)
+
+HERE = os.path.dirname(os.path.abspath(__file__))
+SERVER_RUNNING_FILE = None
+
+
+# HOOKS - inspired by gunicorn #
+
+def when_ready(server):
+ """
+ Called just after the server is started.
+ """
+
+ def _remove_server_running_file():
+ if os.path.isfile(SERVER_RUNNING_FILE):
+ os.remove(SERVER_RUNNING_FILE)
+
+ # write the PID to the running file so external tooling can track it
+ if SERVER_RUNNING_FILE:
+ with open(SERVER_RUNNING_FILE, 'wb') as f:
+ f.write(str(os.getpid()))
+ # register cleanup of that file when server exits
+ atexit.register(_remove_server_running_file)
+
+
+class LazyWriter(object):
+ """
+ File-like object that opens a file lazily when it is first written
+ to.
+ """
+
+ def __init__(self, filename, mode='w'):
+ self.filename = filename
+ self.fileobj = None
+ self.lock = threading.Lock()
+ self.mode = mode
+
+ def open(self):
+ if self.fileobj is None:
+ with self.lock:
+ if self.fileobj is None:
+ self.fileobj = open(self.filename, self.mode)
+ return self.fileobj
+
+ def close(self):
+ fileobj = self.fileobj
+ if fileobj is not None:
+ fileobj.close()
+
+ def __del__(self):
+ self.close()
+
+ def write(self, text):
+ fileobj = self.open()
+ fileobj.write(text)
+ fileobj.flush()
+
+ def writelines(self, text):
+ fileobj = self.open()
+ fileobj.writelines(text)
+ fileobj.flush()
+
+ def flush(self):
+ self.open().flush()
+
+
+class Application(object):
+ """
+ Represents the vcs server application.
+
+ This object is responsible for initializing the application and all
+ needed libraries. After that it hooks the different objects together
+ and provides them a way to access things like configuration.
+ """
+
+ def __init__(
+ self, host, port=None, locale='', threadpool_size=None,
+ timeout=None, cache_config=None, remote_wsgi_=None):
+
+ self.host = host
+ self.port = int(port) if port else settings.PYRO_PORT
+ self.threadpool_size = (
+ int(threadpool_size) if threadpool_size else None)
+ self.locale = locale
+ self.timeout = timeout
+ self.cache_config = cache_config
+ self.remote_wsgi = remote_wsgi_ or remote_wsgi
+
+ def init(self):
+ """
+ Configure and hook together all relevant objects.
+ """
+ self._configure_locale()
+ self._configure_pyro()
+ self._initialize_cache()
+ self._create_daemon_and_remote_objects(host=self.host, port=self.port)
+
+ def run(self):
+ """
+ Start the main loop of the application.
+ """
+
+ if hasattr(os, 'getpid'):
+ log.info('Starting %s in PID %i.', __name__, os.getpid())
+ else:
+ log.info('Starting %s.', __name__)
+ if SERVER_RUNNING_FILE:
+ log.info('PID file written as %s', SERVER_RUNNING_FILE)
+ else:
+ log.info('No PID file written by default.')
+ when_ready(self)
+ try:
+ self._pyrodaemon.requestLoop(
+ loopCondition=lambda: not self._vcsserver._shutdown)
+ finally:
+ self._pyrodaemon.shutdown()
+
+ def _configure_locale(self):
+ if self.locale:
+ log.info('Setting locale `LC_ALL` to %s', self.locale)
+ else:
+ log.info(
+ 'Configuring locale subsystem based on environment variables')
+
+ try:
+ # If self.locale is the empty string, then the locale
+ # module will use the environment variables. See the
+ # documentation of the package `locale`.
+ locale.setlocale(locale.LC_ALL, self.locale)
+
+ language_code, encoding = locale.getlocale()
+ log.info(
+ 'Locale set to language code "%s" with encoding "%s".',
+ language_code, encoding)
+ except locale.Error:
+ log.exception(
+ 'Cannot set locale, not configuring the locale system')
+
+ def _configure_pyro(self):
+ if self.threadpool_size is not None:
+ log.info("Threadpool size set to %s", self.threadpool_size)
+ Pyro4.config.THREADPOOL_SIZE = self.threadpool_size
+ if self.timeout not in (None, 0, 0.0, '0'):
+ log.info("Timeout for RPC calls set to %s seconds", self.timeout)
+ Pyro4.config.COMMTIMEOUT = float(self.timeout)
+ Pyro4.config.SERIALIZER = 'pickle'
+ Pyro4.config.SERIALIZERS_ACCEPTED.add('pickle')
+ Pyro4.config.SOCK_REUSE = True
+ # Uncomment the next line when you need to debug remote errors
+ # Pyro4.config.DETAILED_TRACEBACK = True
+
+ def _initialize_cache(self):
+ cache_config = parse_cache_config_options(self.cache_config)
+ log.info('Initializing beaker cache: %s' % cache_config)
+ self.cache = CacheManager(**cache_config)
+
+ def _create_daemon_and_remote_objects(self, host='localhost',
+ port=settings.PYRO_PORT):
+ daemon = Pyro4.Daemon(host=host, port=port)
+
+ self._vcsserver = VcsServer()
+ uri = daemon.register(
+ self._vcsserver, objectId=settings.PYRO_VCSSERVER)
+ log.info("Object registered = %s", uri)
+
+ if GitFactory and GitRemote:
+ git_repo_cache = self.cache.get_cache_region('git', region='repo_object')
+ git_factory = GitFactory(git_repo_cache)
+ self._git_remote = GitRemote(git_factory)
+ uri = daemon.register(self._git_remote, objectId=settings.PYRO_GIT)
+ log.info("Object registered = %s", uri)
+ else:
+ log.info("Git client import failed")
+
+ if MercurialFactory and HgRemote:
+ hg_repo_cache = self.cache.get_cache_region('hg', region='repo_object')
+ hg_factory = MercurialFactory(hg_repo_cache)
+ self._hg_remote = HgRemote(hg_factory)
+ uri = daemon.register(self._hg_remote, objectId=settings.PYRO_HG)
+ log.info("Object registered = %s", uri)
+ else:
+ log.info("Mercurial client import failed")
+
+ if SubversionFactory and SvnRemote:
+ svn_repo_cache = self.cache.get_cache_region('svn', region='repo_object')
+ svn_factory = SubversionFactory(svn_repo_cache)
+ self._svn_remote = SvnRemote(svn_factory, hg_factory=hg_factory)
+ uri = daemon.register(self._svn_remote, objectId=settings.PYRO_SVN)
+ log.info("Object registered = %s", uri)
+ else:
+ log.info("Subversion client import failed")
+
+ self._git_remote_wsgi = self.remote_wsgi.GitRemoteWsgi()
+ uri = daemon.register(self._git_remote_wsgi,
+ objectId=settings.PYRO_GIT_REMOTE_WSGI)
+ log.info("Object registered = %s", uri)
+
+ self._hg_remote_wsgi = self.remote_wsgi.HgRemoteWsgi()
+ uri = daemon.register(self._hg_remote_wsgi,
+ objectId=settings.PYRO_HG_REMOTE_WSGI)
+ log.info("Object registered = %s", uri)
+
+ self._pyrodaemon = daemon
+
+
+class VcsServerCommand(object):
+
+ usage = '%prog'
+ description = """
+ Runs the VCS server
+ """
+ default_verbosity = 1
+
+ parser = optparse.OptionParser(
+ usage,
+ description=textwrap.dedent(description)
+ )
+ parser.add_option(
+ '--host',
+ type="str",
+ dest="host",
+ )
+ parser.add_option(
+ '--port',
+ type="int",
+ dest="port"
+ )
+ parser.add_option(
+ '--running-file',
+ dest='running_file',
+ metavar='RUNNING_FILE',
+ help="Create a running file after the server is initialized, "
+ "storing the PID of the process"
+ )
+ parser.add_option(
+ '--locale',
+ dest='locale',
+ help="Set the locale, e.g. en_US.UTF-8",
+ default=""
+ )
+ parser.add_option(
+ '--log-file',
+ dest='log_file',
+ metavar='LOG_FILE',
+ help="Save output to the given log file (redirects stdout)"
+ )
+ parser.add_option(
+ '--log-level',
+ dest="log_level",
+ metavar="LOG_LEVEL",
+ help="use LOG_LEVEL to set log level "
+ "(debug,info,warning,error,critical)"
+ )
+ parser.add_option(
+ '--threadpool',
+ dest='threadpool_size',
+ type='int',
+ help="Set the size of the threadpool used to communicate with the "
+ "WSGI workers. This should be at least 6 times the number of "
+ "WSGI worker processes."
+ )
+ parser.add_option(
+ '--timeout',
+ dest='timeout',
+ type='float',
+ help="Set the timeout for RPC communication in seconds."
+ )
+ parser.add_option(
+ '--config',
+ dest='config_file',
+ type='string',
+ help="Configuration file for vcsserver."
+ )
+
+ def __init__(self, argv, quiet=False):
+ self.options, self.args = self.parser.parse_args(argv[1:])
+ if quiet:
+ self.options.verbose = 0
+
+ def _get_file_config(self):
+ ini_conf = {}
+ conf = configobj.ConfigObj(self.options.config_file)
+ if 'DEFAULT' in conf:
+ ini_conf = conf['DEFAULT']
+
+ return ini_conf
+
+ def _show_config(self, vcsserver_config):
+ order = [
+ 'config_file',
+ 'host',
+ 'port',
+ 'log_file',
+ 'log_level',
+ 'locale',
+ 'threadpool_size',
+ 'timeout',
+ 'cache_config',
+ ]
+
+ def sorter(k):
+ return dict([(y, x) for x, y in enumerate(order)]).get(k)
+
+ _config = []
+ for k in sorted(vcsserver_config.keys(), key=sorter):
+ v = vcsserver_config[k]
+ # construct a padded key for display, e.g. '%-20s' % 'key:'
+ k_formatted = ('%-'+str(len(max(order, key=len))+1)+'s') % (k+':')
+ _config.append(' * %s %s' % (k_formatted, v))
+ log.info('\n[vcsserver configuration]:\n'+'\n'.join(_config))
+
+ def _get_vcsserver_configuration(self):
+ _defaults = {
+ 'config_file': None,
+ 'git_path': 'git',
+ 'host': 'localhost',
+ 'port': settings.PYRO_PORT,
+ 'log_file': None,
+ 'log_level': 'debug',
+ 'locale': None,
+ 'threadpool_size': 16,
+ 'timeout': None,
+
+ # Development support
+ 'dev.use_echo_app': False,
+
+ # caches, beaker-style config
+ 'beaker.cache.regions': 'repo_object',
+ 'beaker.cache.repo_object.expire': '10',
+ 'beaker.cache.repo_object.type': 'memory',
+ }
+ config = {}
+ config.update(_defaults)
+ # overwrite defaults with one loaded from file
+ config.update(self._get_file_config())
+
+ # overwrite with self.option which has the top priority
+ for k, v in self.options.__dict__.items():
+ if v or v == 0:
+ config[k] = v
+
+ # clear all "extra" keys if they are somehow passed,
+ # we only want defaults, so any extra stuff from self.options is cleared
+ # except beaker stuff which needs to be dynamic
+ for k in [k for k in config.copy().keys() if not k.startswith('beaker.cache.')]:
+ if k not in _defaults:
+ del config[k]
+
+ # group together the cache into one key.
+ # Needed further for beaker lib configuration
+ _k = {}
+ for k in [k for k in config.copy() if k.startswith('beaker.cache.')]:
+ _k[k] = config.pop(k)
+ config['cache_config'] = _k
+
+ return config
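The grouping step above can be tried in isolation. This is a minimal sketch with a made-up flat config, showing how all `beaker.cache.*` keys end up nested under a single `cache_config` entry:

```python
# Made-up flat configuration for illustration.
config = {
    'host': 'localhost',
    'beaker.cache.regions': 'repo_object',
    'beaker.cache.repo_object.type': 'memory',
}

# Same pattern as in _get_vcsserver_configuration: pop every beaker key
# and collect the group under one 'cache_config' key for the beaker lib.
cache_keys = {}
for k in [k for k in config.copy() if k.startswith('beaker.cache.')]:
    cache_keys[k] = config.pop(k)
config['cache_config'] = cache_keys
```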
+
+ def out(self, msg): # pragma: no cover
+ if self.options.verbose > 0:
+ print(msg)
+
+ def run(self): # pragma: no cover
+ vcsserver_config = self._get_vcsserver_configuration()
+
+ # Ensure the log file is writeable
+ if vcsserver_config['log_file']:
+ stdout_log = self._configure_logfile()
+ else:
+ stdout_log = None
+
+ # set PID file with running lock
+ if self.options.running_file:
+ global SERVER_RUNNING_FILE
+ SERVER_RUNNING_FILE = self.options.running_file
+
+ # configure logging, and logging based on configuration file
+ self._configure_logging(level=vcsserver_config['log_level'],
+ stream=stdout_log)
+ if self.options.config_file:
+ if not os.path.isfile(self.options.config_file):
+ raise OSError('File %s does not exist' %
+ self.options.config_file)
+
+ self._configure_file_logging(self.options.config_file)
+
+ self._configure_settings(vcsserver_config)
+
+ # display current configuration of vcsserver
+ self._show_config(vcsserver_config)
+
+ if not vcsserver_config['dev.use_echo_app']:
+ remote_wsgi_mod = remote_wsgi
+ else:
+ log.warning("Using EchoApp for VCS endpoints.")
+ remote_wsgi_mod = remote_wsgi_stub
+
+ app = Application(
+ host=vcsserver_config['host'],
+ port=vcsserver_config['port'],
+ locale=vcsserver_config['locale'],
+ threadpool_size=vcsserver_config['threadpool_size'],
+ timeout=vcsserver_config['timeout'],
+ cache_config=vcsserver_config['cache_config'],
+ remote_wsgi_=remote_wsgi_mod)
+ app.init()
+ app.run()
+
+ def _configure_logging(self, level, stream=None):
+ _format = (
+ '%(asctime)s.%(msecs)03d %(levelname)-5.5s [%(name)s] %(message)s')
+ levels = {
+ 'debug': logging.DEBUG,
+ 'info': logging.INFO,
+ 'warning': logging.WARNING,
+ 'error': logging.ERROR,
+ 'critical': logging.CRITICAL,
+ }
+ try:
+ level = levels[level]
+ except KeyError:
+ raise AttributeError(
+ 'Invalid log level, please use one of %s' % (levels.keys(),))
+ logging.basicConfig(format=_format, stream=stream, level=level)
+ logging.getLogger('Pyro4').setLevel(level)
+
+ def _configure_file_logging(self, config):
+ import logging.config
+ try:
+ logging.config.fileConfig(config)
+ except Exception as e:
+ log.warning('Failed to configure logging based on given '
+ 'config file. Error: %s' % e)
+
+ def _configure_logfile(self):
+ try:
+ writeable_log_file = open(self.options.log_file, 'a')
+ except IOError as ioe:
+ msg = 'Error: Unable to write to log file: %s' % ioe
+ raise ValueError(msg)
+ writeable_log_file.close()
+ stdout_log = LazyWriter(self.options.log_file, 'a')
+ sys.stdout = stdout_log
+ sys.stderr = stdout_log
+ return stdout_log
+
+ def _configure_settings(self, config):
+ """
+ Configure the settings module based on the given `config`.
+ """
+ settings.GIT_EXECUTABLE = config['git_path']
+
+
+def main(argv=sys.argv, quiet=False):
+ if MercurialFactory:
+ hgpatches.patch_largefiles_capabilities()
+ command = VcsServerCommand(argv, quiet=quiet)
+ return command.run()
diff --git a/vcsserver/pygrack.py b/vcsserver/pygrack.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/pygrack.py
@@ -0,0 +1,375 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+"""Handles the Git smart protocol."""
+
+import os
+import socket
+import logging
+
+import simplejson as json
+import dulwich.protocol
+from webob import Request, Response, exc
+
+from vcsserver import hooks, subprocessio
+
+
+log = logging.getLogger(__name__)
+
+
+class FileWrapper(object):
+ """File wrapper that limits how much data can be read from it."""
+
+ def __init__(self, fd, content_length):
+ self.fd = fd
+ self.content_length = content_length
+ self.remain = content_length
+
+ def read(self, size):
+ if size <= self.remain:
+ try:
+ data = self.fd.read(size)
+ except socket.error:
+ raise IOError(self)
+ self.remain -= size
+ elif self.remain:
+ data = self.fd.read(self.remain)
+ self.remain = 0
+ else:
+ data = None
+ return data
+
+ def __repr__(self):
+ return '<FileWrapper %s len: %s, read: %s>' % (
+ self.fd, self.content_length, self.content_length - self.remain
+ )
+
+
+class GitRepository(object):
+ """WSGI app for handling Git smart protocol endpoints."""
+
+ git_folder_signature = frozenset(
+ ('config', 'head', 'info', 'objects', 'refs'))
+ commands = frozenset(('git-upload-pack', 'git-receive-pack'))
+ valid_accepts = frozenset(('application/x-%s-result' %
+ c for c in commands))
+
+ # The last bytes are the SHA1 of the first 12 bytes.
+ EMPTY_PACK = (
+ 'PACK\x00\x00\x00\x02\x00\x00\x00\x00' +
+ '\x02\x9d\x08\x82;\xd8\xa8\xea\xb5\x10\xadj\xc7\\\x82<\xfd>\xd3\x1e'
+ )
+ SIDE_BAND_CAPS = frozenset(('side-band', 'side-band-64k'))
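The `EMPTY_PACK` constant can be derived rather than hard-coded: a pack header for format version 2 with zero objects, followed by the SHA-1 checksum of those twelve header bytes, as the pack format requires. A quick check:

```python
import hashlib

# 'PACK' magic + format version 2 + zero objects, then the SHA-1
# trailer computed over the 12 header bytes.
header = b'PACK\x00\x00\x00\x02\x00\x00\x00\x00'
empty_pack = header + hashlib.sha1(header).digest()
```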
+
+ def __init__(self, repo_name, content_path, git_path, update_server_info,
+ extras):
+ files = frozenset(f.lower() for f in os.listdir(content_path))
+ valid_dir_signature = self.git_folder_signature.issubset(files)
+
+ if not valid_dir_signature:
+ raise OSError('%s missing git signature' % content_path)
+
+ self.content_path = content_path
+ self.repo_name = repo_name
+ self.extras = extras
+ self.git_path = git_path
+ self.update_server_info = update_server_info
+
+ def _get_fixedpath(self, path):
+ """
+ Small fix for repo_path
+
+ :param path:
+ """
+ return path.split(self.repo_name, 1)[-1].strip('/')
+
+ def inforefs(self, request, unused_environ):
+ """
+ WSGI Response producer for HTTP GET Git Smart
+ HTTP /info/refs request.
+ """
+
+ git_command = request.GET.get('service')
+ if git_command not in self.commands:
+ log.debug('command %s not allowed', git_command)
+ return exc.HTTPForbidden()
+
+ # Please resist the urge to add '\n' to the git advert and increment
+ # the line count by 1.
+ # Per the git docs (Documentation/technical/http-protocol.txt#L214),
+ # \n is part of the protocol.
+ # The git client not only does NOT need '\n', it actually
+ # blows up if you sprinkle "flush" (0000) as "0001\n".
+ # It reads binary, per the number of bytes specified.
+ # If you do add '\n' as part of the data, count it.
+ server_advert = '# service=%s\n' % git_command
+ packet_len = str(hex(len(server_advert) + 4)[2:].rjust(4, '0')).lower()
+ try:
+ gitenv = dict(os.environ)
+ # forget all configs
+ gitenv['RC_SCM_DATA'] = json.dumps(self.extras)
+ command = [self.git_path, git_command[4:], '--stateless-rpc',
+ '--advertise-refs', self.content_path]
+ out = subprocessio.SubprocessIOChunker(
+ command,
+ env=gitenv,
+ starting_values=[packet_len + server_advert + '0000'],
+ shell=False
+ )
+ except EnvironmentError:
+ log.exception('Error processing command')
+ raise exc.HTTPExpectationFailed()
+
+ resp = Response()
+ resp.content_type = 'application/x-%s-advertisement' % str(git_command)
+ resp.charset = None
+ resp.app_iter = out
+
+ return resp
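The four-digit length prefix computed in `inforefs` follows the Smart HTTP pkt-line rule: the length covers the payload plus the four prefix characters themselves, in zero-padded lowercase hex, and `0000` is the flush packet. The same arithmetic in isolation:

```python
# Mirror of the inforefs computation for the upload-pack advertisement.
git_command = 'git-upload-pack'
server_advert = '# service=%s\n' % git_command
packet_len = str(hex(len(server_advert) + 4)[2:].rjust(4, '0')).lower()
first_chunk = packet_len + server_advert + '0000'
```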
+
+ def _get_want_capabilities(self, request):
+ """Read the capabilities found in the first want line of the request."""
+ pos = request.body_file_seekable.tell()
+ first_line = request.body_file_seekable.readline()
+ request.body_file_seekable.seek(pos)
+
+ return frozenset(
+ dulwich.protocol.extract_want_line_capabilities(first_line)[1])
+
+ def _build_failed_pre_pull_response(self, capabilities, pre_pull_messages):
+ """
+ Construct a response with an empty PACK file.
+
+ We use an empty PACK file, as that would trigger the failure of the pull
+ or clone command.
+
+ We also print in the error output a message explaining why the command
+ was aborted.
+
+ If, additionally, the client accepts messages, we send it the output
+ of the pre-pull hook.
+
+ Note that for clients not supporting side-band we just send the
+ empty PACK file.
+ """
+ if self.SIDE_BAND_CAPS.intersection(capabilities):
+ response = []
+ proto = dulwich.protocol.Protocol(None, response.append)
+ proto.write_pkt_line('NAK\n')
+ self._write_sideband_to_proto(pre_pull_messages, proto,
+ capabilities)
+ # N.B.(skreft): Do not change the sideband channel to 3, as that
+ # produces a fatal error in the client:
+ # fatal: error in sideband demultiplexer
+ proto.write_sideband(2, 'Pre pull hook failed: aborting\n')
+ proto.write_sideband(1, self.EMPTY_PACK)
+
+ # writes 0000
+ proto.write_pkt_line(None)
+
+ return response
+ else:
+ return [self.EMPTY_PACK]
+
+ def _write_sideband_to_proto(self, data, proto, capabilities):
+ """
+ Write the data to the proto's sideband number 2.
+
+ We do not use dulwich's write_sideband directly as it only supports
+ side-band-64k.
+ """
+ if not data:
+ return
+
+ # N.B.(skreft): The values below are explained in the pack protocol
+ # documentation, section Packfile Data.
+ # https://github.com/git/git/blob/master/Documentation/technical/pack-protocol.txt
+ if 'side-band-64k' in capabilities:
+ chunk_size = 65515
+ elif 'side-band' in capabilities:
+ chunk_size = 995
+ else:
+ return
+
+ chunker = (
+ data[i:i + chunk_size] for i in xrange(0, len(data), chunk_size))
+
+ for chunk in chunker:
+ proto.write_sideband(2, chunk)
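The chunk sizes above leave room for the pkt-line framing: a plain `side-band` packet carries at most 995 payload bytes, `side-band-64k` up to 65515. A sketch of the same splitting (using `range`, the Python 3 spelling of `xrange`; the function name is mine, not the module's):

```python
def split_for_sideband(data, capabilities):
    # Mirror of _write_sideband_to_proto's chunking, without the protocol
    # writer: pick the chunk size from the negotiated capability.
    if 'side-band-64k' in capabilities:
        chunk_size = 65515
    elif 'side-band' in capabilities:
        chunk_size = 995
    else:
        return []
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

chunks = split_for_sideband('x' * 2000, frozenset(['side-band']))
```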
+
+ def _get_messages(self, data, capabilities):
+ """Return a list with packets for sending data in sideband number 2."""
+ response = []
+ proto = dulwich.protocol.Protocol(None, response.append)
+
+ self._write_sideband_to_proto(data, proto, capabilities)
+
+ return response
+
+ def _inject_messages_to_response(self, response, capabilities,
+ start_messages, end_messages):
+ """
+ Given a list response, we inject the pre/post-pull messages.
+
+ We only inject the messages if the client supports sideband, and the
+ response has the format:
+ 0008NAK\n...0000
+
+ Note that we do not check the no-progress capability as by default, git
+ sends it, which effectively would block all messages.
+ """
+ if not self.SIDE_BAND_CAPS.intersection(capabilities):
+ return response
+
+ if (not response[0].startswith('0008NAK\n') or
+ not response[-1].endswith('0000')):
+ return response
+
+ if not start_messages and not end_messages:
+ return response
+
+ new_response = ['0008NAK\n']
+ new_response.extend(self._get_messages(start_messages, capabilities))
+ if len(response) == 1:
+ new_response.append(response[0][8:-4])
+ else:
+ new_response.append(response[0][8:])
+ new_response.extend(response[1:-1])
+ new_response.append(response[-1][:-4])
+ new_response.extend(self._get_messages(end_messages, capabilities))
+ new_response.append('0000')
+
+ return new_response
+
+ def backend(self, request, environ):
+ """
+ WSGI Response producer for HTTP POST Git Smart HTTP requests.
+ Reads commands and data from the HTTP POST body and returns an
+ iterator with the contents of the git command's stdout.
+ """
+ # TODO(skreft): think how we could detect an HTTPLockedException, as
+ # we probably want to have the same mechanism used by mercurial and
+ # simplevcs.
+ # For that we would need to parse the output of the command looking for
+ # some signs of the HTTPLockedError, parse the data and reraise it in
+ # pygrack. However, that would interfere with the streaming.
+ #
+ # Now the output of a blocked push is:
+ # Pushing to http://test_regular:test12@127.0.0.1:5001/vcs_test_git
+ # POST git-receive-pack (1047 bytes)
+ # remote: ERROR: Repository `vcs_test_git` locked by user `test_admin`. Reason:`lock_auto`
+ # To http://test_regular:test12@127.0.0.1:5001/vcs_test_git
+ # ! [remote rejected] master -> master (pre-receive hook declined)
+ # error: failed to push some refs to 'http://test_regular:test12@127.0.0.1:5001/vcs_test_git'
+
+ git_command = self._get_fixedpath(request.path_info)
+ if git_command not in self.commands:
+ log.debug('command %s not allowed', git_command)
+ return exc.HTTPForbidden()
+
+ capabilities = None
+ if git_command == 'git-upload-pack':
+ capabilities = self._get_want_capabilities(request)
+
+ if 'CONTENT_LENGTH' in environ:
+ inputstream = FileWrapper(request.body_file_seekable,
+ request.content_length)
+ else:
+ inputstream = request.body_file_seekable
+
+ resp = Response()
+ resp.content_type = ('application/x-%s-result' %
+ git_command.encode('utf8'))
+ resp.charset = None
+
+ if git_command == 'git-upload-pack':
+ status, pre_pull_messages = hooks.git_pre_pull(self.extras)
+ if status != 0:
+ resp.app_iter = self._build_failed_pre_pull_response(
+ capabilities, pre_pull_messages)
+ return resp
+
+ gitenv = dict(os.environ)
+ # forget all configs
+ gitenv['GIT_CONFIG_NOGLOBAL'] = '1'
+ gitenv['RC_SCM_DATA'] = json.dumps(self.extras)
+ cmd = [self.git_path, git_command[4:], '--stateless-rpc',
+ self.content_path]
+ log.debug('handling cmd %s', cmd)
+
+ out = subprocessio.SubprocessIOChunker(
+ cmd,
+ inputstream=inputstream,
+ env=gitenv,
+ cwd=self.content_path,
+ shell=False,
+ fail_on_stderr=False,
+ fail_on_return_code=False
+ )
+
+ if self.update_server_info and git_command == 'git-receive-pack':
+ # We need to fully consume the iterator here, as the
+ # update-server-info command needs to be run after the push.
+ out = list(out)
+
+ # Updating refs manually after each push.
+ # This is required as some clients are exposing Git repos internally
+ # with the dumb protocol.
+ cmd = [self.git_path, 'update-server-info']
+ log.debug('handling cmd %s', cmd)
+ output = subprocessio.SubprocessIOChunker(
+ cmd,
+ inputstream=inputstream,
+ env=gitenv,
+ cwd=self.content_path,
+ shell=False,
+ fail_on_stderr=False,
+ fail_on_return_code=False
+ )
+ # Consume all the output so the subprocess finishes
+ for _ in output:
+ pass
+
+ if git_command == 'git-upload-pack':
+ out = list(out)
+ unused_status, post_pull_messages = hooks.git_post_pull(self.extras)
+ resp.app_iter = self._inject_messages_to_response(
+ out, capabilities, pre_pull_messages, post_pull_messages)
+ else:
+ resp.app_iter = out
+
+ return resp
+
+ def __call__(self, environ, start_response):
+ request = Request(environ)
+ _path = self._get_fixedpath(request.path_info)
+ if _path.startswith('info/refs'):
+ app = self.inforefs
+ else:
+ app = self.backend
+
+ try:
+ resp = app(request, environ)
+ except exc.HTTPException as error:
+ log.exception('HTTP Error')
+ resp = error
+ except Exception:
+ log.exception('Unknown error')
+ resp = exc.HTTPInternalServerError()
+
+ return resp(environ, start_response)
diff --git a/vcsserver/remote_wsgi.py b/vcsserver/remote_wsgi.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/remote_wsgi.py
@@ -0,0 +1,34 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from vcsserver import scm_app, wsgi_app_caller
+
+
+class GitRemoteWsgi(object):
+ def handle(self, environ, input_data, *args, **kwargs):
+ app = wsgi_app_caller.WSGIAppCaller(
+ scm_app.create_git_wsgi_app(*args, **kwargs))
+
+ return app.handle(environ, input_data)
+
+
+class HgRemoteWsgi(object):
+ def handle(self, environ, input_data, *args, **kwargs):
+ app = wsgi_app_caller.WSGIAppCaller(
+ scm_app.create_hg_wsgi_app(*args, **kwargs))
+
+ return app.handle(environ, input_data)
diff --git a/vcsserver/scm_app.py b/vcsserver/scm_app.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/scm_app.py
@@ -0,0 +1,174 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import logging
+import os
+
+import mercurial
+import mercurial.error
+import mercurial.hgweb.common
+import mercurial.hgweb.hgweb_mod
+import mercurial.hgweb.protocol
+import webob.exc
+
+from vcsserver import pygrack, exceptions, settings
+
+
+log = logging.getLogger(__name__)
+
+
+# propagated from mercurial documentation
+HG_UI_SECTIONS = [
+ 'alias', 'auth', 'decode/encode', 'defaults', 'diff', 'email', 'extensions',
+ 'format', 'merge-patterns', 'merge-tools', 'hooks', 'http_proxy', 'smtp',
+ 'patch', 'paths', 'profiling', 'server', 'trusted', 'ui', 'web',
+]
+
+
+class HgWeb(mercurial.hgweb.hgweb_mod.hgweb):
+ """Extension of hgweb that simplifies some functions."""
+
+ def _get_view(self, repo):
+ """Views are not supported."""
+ return repo
+
+ def loadsubweb(self):
+ """The result is only used in the templater method which is not used."""
+ return None
+
+ def run(self):
+ """Unused function so raise an exception if accidentally called."""
+ raise NotImplementedError
+
+ def templater(self, req):
+ """Function used in an unreachable code path.
+
+ This code is unreachable because we guarantee that the HTTP request
+ corresponds to a Mercurial command. See the is_hg method. So, we are
+ never going to get a user-visible url.
+ """
+ raise NotImplementedError
+
+ def archivelist(self, nodeid):
+ """Unused function so raise an exception if accidentally called."""
+ raise NotImplementedError
+
+ def run_wsgi(self, req):
+ """Check the request has a valid command, failing fast otherwise."""
+ cmd = req.form.get('cmd', [''])[0]
+ if not mercurial.hgweb.protocol.iscmd(cmd):
+ req.respond(
+ mercurial.hgweb.common.ErrorResponse(
+ mercurial.hgweb.common.HTTP_BAD_REQUEST),
+ mercurial.hgweb.protocol.HGTYPE
+ )
+ return ['']
+
+ return super(HgWeb, self).run_wsgi(req)
+
+
+def make_hg_ui_from_config(repo_config):
+ baseui = mercurial.ui.ui()
+
+ # clean the baseui object
+ baseui._ocfg = mercurial.config.config()
+ baseui._ucfg = mercurial.config.config()
+ baseui._tcfg = mercurial.config.config()
+
+ for section, option, value in repo_config:
+ baseui.setconfig(section, option, value)
+
+ # make our hgweb quiet so it doesn't print output
+ baseui.setconfig('ui', 'quiet', 'true')
+
+ return baseui
+
+
+def update_hg_ui_from_hgrc(baseui, repo_path):
+ path = os.path.join(repo_path, '.hg', 'hgrc')
+
+ if not os.path.isfile(path):
+ log.debug('hgrc file is not present at %s, skipping...', path)
+ return
+ log.debug('reading hgrc from %s', path)
+ cfg = mercurial.config.config()
+ cfg.read(path)
+ for section in HG_UI_SECTIONS:
+ for k, v in cfg.items(section):
+ log.debug('setting ui from file: [%s] %s=%s', section, k, v)
+ baseui.setconfig(section, k, v)
+
+
+def create_hg_wsgi_app(repo_path, repo_name, config):
+ """
+ Prepares a WSGI application to handle Mercurial requests.
+
+ :param config: is a list of 3-item tuples representing a ConfigObject
+ (it is the serialized version of the config object).
+ """
+ log.debug("Creating Mercurial WSGI application")
+
+ baseui = make_hg_ui_from_config(config)
+ update_hg_ui_from_hgrc(baseui, repo_path)
+
+ try:
+ return HgWeb(repo_path, name=repo_name, baseui=baseui)
+ except mercurial.error.RequirementError as exc:
+ raise exceptions.RequirementException(exc)
+
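As the docstring of `create_hg_wsgi_app` notes, `config` is a serialized list of 3-item `(section, option, value)` tuples that `make_hg_ui_from_config` applies one by one via `baseui.setconfig`. A hypothetical example of such a serialized config (the section/option values here are illustrative, not taken from this codebase):

```python
# Illustrative only: example (section, option, value) tuples in the
# shape that create_hg_wsgi_app expects for its `config` argument.
config = [
    ('web', 'push_ssl', 'false'),
    ('ui', 'quiet', 'true'),
]

# Tuples are applied in order, so a later duplicate would win; the
# resulting effective settings can be previewed with a nested dict:
preview = {}
for section, option, value in config:
    preview.setdefault(section, {})[option] = value
print(preview)
```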
+
+class GitHandler(object):
+ def __init__(self, repo_location, repo_name, git_path, update_server_info,
+ extras):
+ if not os.path.isdir(repo_location):
+ raise OSError(repo_location)
+ self.content_path = repo_location
+ self.repo_name = repo_name
+ self.repo_location = repo_location
+ self.extras = extras
+ self.git_path = git_path
+ self.update_server_info = update_server_info
+
+ def __call__(self, environ, start_response):
+ app = webob.exc.HTTPNotFound()
+ candidate_paths = (
+ self.content_path, os.path.join(self.content_path, '.git'))
+
+ for content_path in candidate_paths:
+ try:
+ app = pygrack.GitRepository(
+ self.repo_name, content_path, self.git_path,
+ self.update_server_info, self.extras)
+ break
+ except OSError:
+ continue
+
+ return app(environ, start_response)
+
+
+def create_git_wsgi_app(repo_path, repo_name, config):
+ """
+ Creates a WSGI application to handle Git requests.
+
+ :param config: is a dictionary holding the extras.
+ """
+ git_path = settings.GIT_EXECUTABLE
+ update_server_info = config.pop('git_update_server_info')
+ app = GitHandler(
+ repo_path, repo_name, git_path, update_server_info, config)
+
+ return app
diff --git a/vcsserver/server.py b/vcsserver/server.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/server.py
@@ -0,0 +1,78 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+import gc
+import logging
+import os
+import time
+
+
+log = logging.getLogger(__name__)
+
+
+class VcsServer(object):
+ """
+ Exposed remote interface of the vcsserver itself.
+
+ This object can be used to manage the server remotely. Right now the main
+ use case is to allow to shut down the server.
+ """
+
+ _shutdown = False
+
+ def shutdown(self):
+ self._shutdown = True
+
+ def ping(self):
+ """
+ Utility to probe a server connection.
+ """
+ log.debug("Received server ping.")
+
+ def echo(self, data):
+ """
+ Utility for performance testing.
+
+ Allows passing in arbitrary data, which is then returned unchanged.
+ """
+ log.debug("Received server echo.")
+ return data
+
+ def sleep(self, seconds):
+ """
+ Utility to simulate a long-running server interaction.
+ """
+ log.debug("Sleeping %s seconds", seconds)
+ time.sleep(seconds)
+
+ def get_pid(self):
+ """
+ Allows discovering the PID through a proxy object.
+ """
+ return os.getpid()
+
+ def run_gc(self):
+ """
+ Allows triggering the garbage collector.
+
+ Main intention is to support statistics gathering during test runs.
+ """
+ freed_objects = gc.collect()
+ return {
+ 'freed_objects': freed_objects,
+ 'garbage': len(gc.garbage),
+ }
diff --git a/vcsserver/settings.py b/vcsserver/settings.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/settings.py
@@ -0,0 +1,30 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+PYRO_PORT = 9900
+
+PYRO_GIT = 'git_remote'
+PYRO_HG = 'hg_remote'
+PYRO_SVN = 'svn_remote'
+PYRO_VCSSERVER = 'vcs_server'
+PYRO_GIT_REMOTE_WSGI = 'git_remote_wsgi'
+PYRO_HG_REMOTE_WSGI = 'hg_remote_wsgi'
+
+WIRE_ENCODING = 'UTF-8'
+
+GIT_EXECUTABLE = 'git'
diff --git a/vcsserver/subprocessio.py b/vcsserver/subprocessio.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/subprocessio.py
@@ -0,0 +1,476 @@
+"""
+Module provides a class allowing to wrap communication over subprocess.Popen
+input, output, error streams into a meaningful, non-blocking, concurrent
+stream processor exposing the output data as an iterator fitting to be a
+return value passed by a WSGI application to a WSGI server per PEP 3333.
+
+Copyright (c) 2011 Daniel Dotsenko
+
+This file is part of git_http_backend.py Project.
+
+git_http_backend.py Project is free software: you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public License as
+published by the Free Software Foundation, either version 2.1 of the License,
+or (at your option) any later version.
+
+git_http_backend.py Project is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public License
+along with git_http_backend.py Project.
+If not, see <http://www.gnu.org/licenses/>.
+"""
+import os
+import subprocess32 as subprocess
+from collections import deque
+from threading import Event, Thread
+
+
+class StreamFeeder(Thread):
+ """
+ Normal writing into a pipe-like object blocks once the buffer is filled.
+ This thread feeds data from a file-like object into a pipe without
+ blocking the main thread.
+ We close the write end of the pipe once the end of the source stream is
+ reached.
+ """
+
+ def __init__(self, source):
+ super(StreamFeeder, self).__init__()
+ self.daemon = True
+ filelike = False
+ self.bytes = bytes()
+ if type(source) in (type(''), bytes, bytearray): # string-like
+ self.bytes = bytes(source)
+ else: # can be either file pointer or file-like
+ if type(source) in (int, long): # file pointer it is
+ ## converting file descriptor (int) stdin into file-like
+ try:
+ source = os.fdopen(source, 'rb', 16384)
+ except Exception:
+ pass
+ # let's see if source is file-like by now
+ try:
+ filelike = source.read
+ except Exception:
+ pass
+ if not filelike and not self.bytes:
+ raise TypeError("StreamFeeder's source object must be a readable "
+ "file-like, a file descriptor, or a string-like.")
+ self.source = source
+ self.readiface, self.writeiface = os.pipe()
+
+ def run(self):
+ t = self.writeiface
+ if self.bytes:
+ os.write(t, self.bytes)
+ else:
+ s = self.source
+ b = s.read(4096)
+ while b:
+ os.write(t, b)
+ b = s.read(4096)
+ os.close(t)
+
+ @property
+ def output(self):
+ return self.readiface
+
+
+class InputStreamChunker(Thread):
+ def __init__(self, source, target, buffer_size, chunk_size):
+
+ super(InputStreamChunker, self).__init__()
+
+ self.daemon = True # die die die.
+
+ self.source = source
+ self.target = target
+ self.chunk_count_max = int(buffer_size / chunk_size) + 1
+ self.chunk_size = chunk_size
+
+ self.data_added = Event()
+ self.data_added.clear()
+
+ self.keep_reading = Event()
+ self.keep_reading.set()
+
+ self.EOF = Event()
+ self.EOF.clear()
+
+ self.go = Event()
+ self.go.set()
+
+ def stop(self):
+ self.go.clear()
+ self.EOF.set()
+ try:
+ # this is not proper, but is done to force the reader thread let
+ # go of the input because, if successful, .close() will send EOF
+ # down the pipe.
+ self.source.close()
+ except:
+ pass
+
+ def run(self):
+ s = self.source
+ t = self.target
+ cs = self.chunk_size
+ ccm = self.chunk_count_max
+ kr = self.keep_reading
+ da = self.data_added
+ go = self.go
+
+ try:
+ b = s.read(cs)
+ except ValueError:
+ b = ''
+
+ while b and go.is_set():
+ if len(t) > ccm:
+ kr.clear()
+ kr.wait(2)
+ # # this only works on 2.7.x and up
+ # if not kr.wait(10):
+ # raise Exception("Timed out while waiting for input to be read.")
+ # instead we'll use this
+ if len(t) > ccm + 3:
+ raise IOError(
+ "Timed out while waiting for input from subprocess.")
+ t.append(b)
+ da.set()
+ b = s.read(cs)
+ self.EOF.set()
+ da.set() # for cases when done but there was no input.
+
+
+class BufferedGenerator(object):
+ """
+ Class behaves as a non-blocking, buffered pipe reader.
+ Reads chunks of data (through a thread) from a blocking pipe, and
+ attaches these to a deque of chunks.
+ Reading is halted in the thread when the maximum number of chunks is
+ buffered internally.
+ The .next() method may operate in a blocking or non-blocking fashion:
+ by yielding '' if no data is ready to be sent, or by not returning
+ until there is some data to send.
+ When we get EOF from the underlying source pipe, we raise a marker to
+ raise StopIteration after the last chunk of data is yielded.
+ """
+
+ def __init__(self, source, buffer_size=65536, chunk_size=4096,
+ starting_values=[], bottomless=False):
+
+ if bottomless:
+ maxlen = int(buffer_size / chunk_size)
+ else:
+ maxlen = None
+
+ self.data = deque(starting_values, maxlen)
+ self.worker = InputStreamChunker(source, self.data, buffer_size,
+ chunk_size)
+ if starting_values:
+ self.worker.data_added.set()
+ self.worker.start()
+
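In "bottomless" mode the deque above is bounded (`maxlen = buffer_size / chunk_size`), so when the consumer lags, the oldest chunks are silently discarded instead of pausing the reader; this is how stderr is handled later in `SubprocessIOChunker`. A quick sketch of that bounded-deque behavior:

```python
from collections import deque

# A bounded deque caps the number of buffered chunks; appending past
# the bound silently drops chunks from the opposite end.
buf = deque(maxlen=4)
for chunk in [b'a', b'b', b'c', b'd', b'e']:
    buf.append(chunk)
print(list(buf))  # b'a' has been discarded
```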
+ ####################
+ # Generator's methods
+ ####################
+
+ def __iter__(self):
+ return self
+
+ def next(self):
+ while not len(self.data) and not self.worker.EOF.is_set():
+ self.worker.data_added.clear()
+ self.worker.data_added.wait(0.2)
+ if len(self.data):
+ self.worker.keep_reading.set()
+ return bytes(self.data.popleft())
+ elif self.worker.EOF.is_set():
+ raise StopIteration
+
+ def throw(self, type, value=None, traceback=None):
+ if not self.worker.EOF.is_set():
+ raise type(value)
+
+ def start(self):
+ self.worker.start()
+
+ def stop(self):
+ self.worker.stop()
+
+ def close(self):
+ try:
+ self.worker.stop()
+ self.throw(GeneratorExit)
+ except (GeneratorExit, StopIteration):
+ pass
+
+ def __del__(self):
+ self.close()
+
+ ####################
+ # Threaded reader's infrastructure.
+ ####################
+ @property
+ def input(self):
+ return self.worker.w
+
+ @property
+ def data_added_event(self):
+ return self.worker.data_added
+
+ @property
+ def data_added(self):
+ return self.worker.data_added.is_set()
+
+ @property
+ def reading_paused(self):
+ return not self.worker.keep_reading.is_set()
+
+ @property
+ def done_reading_event(self):
+ """
+ Done_reading does not mean that the iterator's buffer is empty.
+ The iterator might be done reading from the underlying source, but the
+ read chunks might still be available for serving through the .next()
+ method.
+
+ :returns: An Event class instance.
+ """
+ return self.worker.EOF
+
+ @property
+ def done_reading(self):
+ """
+ Done_reading does not mean that the iterator's buffer is empty.
+ The iterator might be done reading from the underlying source, but the
+ read chunks might still be available for serving through the .next()
+ method.
+
+ :returns: A bool value.
+ """
+ return self.worker.EOF.is_set()
+
+ @property
+ def length(self):
+ """
+ returns int.
+
+ This is the length of the queue of chunks, not the length of
+ the combined contents in those chunks.
+
+ __len__() cannot be meaningfully implemented because this
+ reader is just flying through bottomless content and
+ can only know the length of what it already saw.
+
+ If __len__() returned a value, a WSGI server could, per PEP 3333,
+ set the response's length to it. In order not to confuse PEP 3333
+ servers, we do not implement __len__ at all.
+ """
+ return len(self.data)
+
+ def prepend(self, x):
+ self.data.appendleft(x)
+
+ def append(self, x):
+ self.data.append(x)
+
+ def extend(self, o):
+ self.data.extend(o)
+
+ def __getitem__(self, i):
+ return self.data[i]
+
+
+class SubprocessIOChunker(object):
+ """
+ Processor class wrapping handling of subprocess IO.
+
+ .. important::
+
+ Watch out for the method `__del__` on this class. If this object
+ is deleted, it will kill the subprocess, so avoid to
+ return the `output` attribute or usage of it like in the following
+ example::
+
+ # `args` expected to run a program that produces a lot of output
+ output = ''.join(SubprocessIOChunker(
+ args, shell=False, inputstream=inputstream, env=environ).output)
+
+ # `output` will not contain all the data, because the __del__ method
+ # has already killed the subprocess in this case before all output
+ # has been consumed.
+
+
+
+ In a way, this is a "communicate()" replacement with a twist.
+
+ - We are multithreaded. Writing stdin, and reading stdout and stderr, each
+ happen in separate threads.
+ - We support concurrent (in and out) stream processing.
+ - The output is not a stream. It's a queue of read string (bytes, not unicode)
+ chunks. The object behaves as an iterable: you can iterate over it with
+ "for chunk in obj:".
+ - We are non-blocking in more respects than communicate()
+ (reading from subprocess out pauses when internal buffer is full, but
+ does not block the parent calling code. On the flip side, reading from
+ slow-yielding subprocess may block the iteration until data shows up. This
+ does not block the parallel inpipe reading occurring in a parallel thread.)
+
+ The purpose of the object is to allow us to wrap subprocess interactions into
+ an iterable that can be passed to a WSGI server as the application's return
+ value. Because of its stream-processing ability, the application does not have
+ to read ALL of the subprocess's output and buffer it before handing it to the
+ WSGI server for the HTTP response. Instead, the class initializer reads just a
+ bit of the stream to figure out if an error occurred or is likely to occur,
+ and if not, hands the further iteration over the subprocess output to the
+ server for completion of the HTTP response.
+
+ The real or perceived subprocess error is trapped and raised as one of the
+ EnvironmentError family of exceptions.
+
+ Example usage:
+ # try:
+ # answer = SubprocessIOChunker(
+ # cmd,
+ # input,
+ # buffer_size = 65536,
+ # chunk_size = 4096
+ # )
+ # except (EnvironmentError) as e:
+ # print str(e)
+ # raise e
+ #
+ # return answer
+
+
+ """
+
+ # TODO: johbo: This is used to make sure that the open end of the PIPE
+ # is closed in the end. It would be way better to wrap this into an
+ # object, so that it is closed automatically once it is consumed or
+ # something similar.
+ _close_input_fd = None
+
+ _closed = False
+
+ def __init__(self, cmd, inputstream=None, buffer_size=65536,
+ chunk_size=4096, starting_values=[], fail_on_stderr=True,
+ fail_on_return_code=True, **kwargs):
+ """
+ Initializes SubprocessIOChunker
+
+ :param cmd: A Subprocess.Popen style "cmd". Can be string or array of strings
+ :param inputstream: (Default: None) A file-like, string, or file pointer.
+ :param buffer_size: (Default: 65536) A size of total buffer per stream in bytes.
+ :param chunk_size: (Default: 4096) A max size of a chunk. Actual chunk may be smaller.
+ :param starting_values: (Default: []) An array of strings to put in front of the output queue.
+ :param fail_on_stderr: (Default: True) Whether to raise an exception in
+ case something is written to stderr.
+ :param fail_on_return_code: (Default: True) Whether to raise an
+ exception if the return code is not 0.
+ """
+
+ if inputstream:
+ input_streamer = StreamFeeder(inputstream)
+ input_streamer.start()
+ inputstream = input_streamer.output
+ self._close_input_fd = inputstream
+
+ self._fail_on_stderr = fail_on_stderr
+ self._fail_on_return_code = fail_on_return_code
+
+ _shell = kwargs.get('shell', True)
+ kwargs['shell'] = _shell
+
+ _p = subprocess.Popen(cmd, bufsize=-1,
+ stdin=inputstream,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ **kwargs)
+
+ bg_out = BufferedGenerator(_p.stdout, buffer_size, chunk_size,
+ starting_values)
+ bg_err = BufferedGenerator(_p.stderr, 16000, 1, bottomless=True)
+
+ while not bg_out.done_reading and not bg_out.reading_paused and not bg_err.length:
+ # doing this until we reach either end of file, or end of buffer.
+ bg_out.data_added_event.wait(1)
+ bg_out.data_added_event.clear()
+
+ # at this point it's still ambiguous if we are done reading or just full buffer.
+ # Either way, if error (returned by ended process, or implied based on
+ # presence of stuff in stderr output) we error out.
+ # Else, we are happy.
+ _returncode = _p.poll()
+
+ if ((_returncode and fail_on_return_code) or
+ (fail_on_stderr and _returncode is None and bg_err.length)):
+ try:
+ _p.terminate()
+ except Exception:
+ pass
+ bg_out.stop()
+ bg_err.stop()
+ if fail_on_stderr:
+ err = ''.join(bg_err)
+ raise EnvironmentError(
+ "Subprocess exited due to an error:\n" + err)
+ if _returncode and fail_on_return_code:
+ err = ''.join(bg_err)
+ raise EnvironmentError(
+ "Subprocess exited with non 0 ret code:%s: stderr:%s" % (
+ _returncode, err))
+
+ self.process = _p
+ self.output = bg_out
+ self.error = bg_err
+
+ def __iter__(self):
+ return self
+
+ def next(self):
+ # Note: mikhail: We need to be sure that we are checking the return
+ # code after the stdout stream is closed. Some processes, e.g. git
+ # are doing some magic in between closing stdout and terminating the
+ # process and, as a result, we are not getting return code on "slow"
+ # systems.
+ stop_iteration = None
+ try:
+ result = self.output.next()
+ except StopIteration as e:
+ stop_iteration = e
+
+ if self.process.poll() and self._fail_on_return_code:
+ err = '%s' % ''.join(self.error)
+ raise EnvironmentError(
+ "Subprocess exited due to an error:\n" + err)
+
+ if stop_iteration:
+ raise stop_iteration
+ return result
+
+ def throw(self, type, value=None, traceback=None):
+ if self.output.length or not self.output.done_reading:
+ raise type(value)
+
+ def close(self):
+ if self._closed:
+ return
+ self._closed = True
+ try:
+ self.process.terminate()
+ except:
+ pass
+ if self._close_input_fd:
+ os.close(self._close_input_fd)
+ try:
+ self.output.close()
+ except:
+ pass
+ try:
+ self.error.close()
+ except:
+ pass
+
+ def __del__(self):
+ self.close()
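Minus the reader threads and stderr buffering, the streaming contract of `SubprocessIOChunker` can be approximated by a plain generator. This single-threaded sketch (the function name is ours, not part of this module) yields stdout in chunks and fails on a non-zero exit code:

```python
import subprocess

def iter_command_output(cmd, chunk_size=4096):
    # Single-threaded sketch of the SubprocessIOChunker contract:
    # stream stdout chunk by chunk, then check the return code.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    try:
        while True:
            chunk = proc.stdout.read(chunk_size)
            if not chunk:
                break
            yield chunk
    finally:
        proc.stdout.close()
        if proc.wait() != 0:
            raise EnvironmentError(
                'Subprocess exited with code %s' % proc.returncode)
```

Unlike `SubprocessIOChunker`, this version blocks the producer whenever the consumer is slow, which is exactly the situation the extra feeder/chunker threads above are there to avoid.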
diff --git a/vcsserver/svn.py b/vcsserver/svn.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/svn.py
@@ -0,0 +1,591 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from __future__ import absolute_import
+
+from urllib2 import URLError
+import logging
+import posixpath as vcspath
+import StringIO
+import subprocess
+import urllib
+
+import svn.client
+import svn.core
+import svn.delta
+import svn.diff
+import svn.fs
+import svn.repos
+
+from vcsserver import svn_diff
+from vcsserver.base import RepoFactory
+
+
+log = logging.getLogger(__name__)
+
+
+# Set of svn compatible version flags.
+# Compare with subversion/svnadmin/svnadmin.c
+svn_compatible_versions = set([
+ 'pre-1.4-compatible',
+ 'pre-1.5-compatible',
+ 'pre-1.6-compatible',
+ 'pre-1.8-compatible',
+])
+
+
+class SubversionFactory(RepoFactory):
+
+ def _create_repo(self, wire, create, compatible_version):
+ path = svn.core.svn_path_canonicalize(wire['path'])
+ if create:
+ fs_config = {}
+ if compatible_version:
+ if compatible_version not in svn_compatible_versions:
+ raise Exception('Unknown SVN compatible version "{}"'
+ .format(compatible_version))
+ log.debug('Create SVN repo with compatible version "%s"',
+ compatible_version)
+ fs_config[compatible_version] = '1'
+ repo = svn.repos.create(path, "", "", None, fs_config)
+ else:
+ repo = svn.repos.open(path)
+ return repo
+
+ def repo(self, wire, create=False, compatible_version=None):
+ def create_new_repo():
+ return self._create_repo(wire, create, compatible_version)
+
+ return self._repo(wire, create_new_repo)
+
+
+NODE_TYPE_MAPPING = {
+ svn.core.svn_node_file: 'file',
+ svn.core.svn_node_dir: 'dir',
+}
+
+
+class SvnRemote(object):
+
+ def __init__(self, factory, hg_factory=None):
+ self._factory = factory
+ # TODO: Remove once we do not use internal Mercurial objects anymore
+ # for subversion
+ self._hg_factory = hg_factory
+
+ def check_url(self, url, config_items):
+ # this can throw exception if not installed, but we detect this
+ from hgsubversion import svnrepo
+
+ baseui = self._hg_factory._create_config(config_items)
+ # the uuid function returns a valid UUID only for a proper repo,
+ # otherwise it throws an exception
+ try:
+ svnrepo.svnremoterepo(baseui, url).svn.uuid
+ except:
+ log.debug("Invalid svn url: %s", url)
+ raise URLError(
+ '"%s" is not a valid Subversion source url.' % (url, ))
+ return True
+
+ def is_path_valid_repository(self, wire, path):
+ try:
+ svn.repos.open(path)
+ except svn.core.SubversionException:
+ log.debug("Invalid Subversion path %s", path)
+ return False
+ return True
+
+ def lookup(self, wire, revision):
+ if revision not in [-1, None, 'HEAD']:
+ raise NotImplementedError
+ repo = self._factory.repo(wire)
+ fs_ptr = svn.repos.fs(repo)
+ head = svn.fs.youngest_rev(fs_ptr)
+ return head
+
+ def lookup_interval(self, wire, start_ts, end_ts):
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ start_rev = None
+ end_rev = None
+ if start_ts:
+ start_ts_svn = apr_time_t(start_ts)
+ start_rev = svn.repos.dated_revision(repo, start_ts_svn) + 1
+ else:
+ start_rev = 1
+ if end_ts:
+ end_ts_svn = apr_time_t(end_ts)
+ end_rev = svn.repos.dated_revision(repo, end_ts_svn)
+ else:
+ end_rev = svn.fs.youngest_rev(fsobj)
+ return start_rev, end_rev
+
+ def revision_properties(self, wire, revision):
+ repo = self._factory.repo(wire)
+ fs_ptr = svn.repos.fs(repo)
+ return svn.fs.revision_proplist(fs_ptr, revision)
+
+ def revision_changes(self, wire, revision):
+
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ rev_root = svn.fs.revision_root(fsobj, revision)
+
+ editor = svn.repos.ChangeCollector(fsobj, rev_root)
+ editor_ptr, editor_baton = svn.delta.make_editor(editor)
+ base_dir = ""
+ send_deltas = False
+ svn.repos.replay2(
+ rev_root, base_dir, svn.core.SVN_INVALID_REVNUM, send_deltas,
+ editor_ptr, editor_baton, None)
+
+ added = []
+ changed = []
+ removed = []
+
+ # TODO: CHANGE_ACTION_REPLACE: Figure out where it belongs
+ for path, change in editor.changes.iteritems():
+ # TODO: Decide what to do with directory nodes. Subversion can add
+ # empty directories.
+ if change.item_kind == svn.core.svn_node_dir:
+ continue
+ if change.action == svn.repos.CHANGE_ACTION_ADD:
+ added.append(path)
+ elif change.action == svn.repos.CHANGE_ACTION_MODIFY:
+ changed.append(path)
+ elif change.action == svn.repos.CHANGE_ACTION_DELETE:
+ removed.append(path)
+ else:
+ raise NotImplementedError(
+ "Action %s not supported on path %s" % (
+ change.action, path))
+
+ changes = {
+ 'added': added,
+ 'changed': changed,
+ 'removed': removed,
+ }
+ return changes
+
+ def node_history(self, wire, path, revision, limit):
+ cross_copies = False
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ rev_root = svn.fs.revision_root(fsobj, revision)
+
+ history_revisions = []
+ history = svn.fs.node_history(rev_root, path)
+ history = svn.fs.history_prev(history, cross_copies)
+ while history:
+ __, node_revision = svn.fs.history_location(history)
+ history_revisions.append(node_revision)
+ if limit and len(history_revisions) >= limit:
+ break
+ history = svn.fs.history_prev(history, cross_copies)
+ return history_revisions
+
+ def node_properties(self, wire, path, revision):
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ rev_root = svn.fs.revision_root(fsobj, revision)
+ return svn.fs.node_proplist(rev_root, path)
+
+ def file_annotate(self, wire, path, revision):
+ abs_path = 'file://' + urllib.pathname2url(
+ vcspath.join(wire['path'], path))
+ file_uri = svn.core.svn_path_canonicalize(abs_path)
+
+ start_rev = svn_opt_revision_value_t(0)
+ peg_rev = svn_opt_revision_value_t(revision)
+ end_rev = peg_rev
+
+ annotations = []
+
+ def receiver(line_no, revision, author, date, line, pool):
+ annotations.append((line_no, revision, line))
+
+ # TODO: Cannot use blame5, missing typemap function in the swig code
+ try:
+ svn.client.blame2(
+ file_uri, peg_rev, start_rev, end_rev,
+ receiver, svn.client.create_context())
+ except svn.core.SubversionException as exc:
+ log.exception("Error during blame operation.")
+ raise Exception(
+ "Blame not supported or file does not exist at path %s. "
+ "Error %s." % (path, exc))
+
+ return annotations
+
+ def get_node_type(self, wire, path, rev=None):
+ repo = self._factory.repo(wire)
+ fs_ptr = svn.repos.fs(repo)
+ if rev is None:
+ rev = svn.fs.youngest_rev(fs_ptr)
+ root = svn.fs.revision_root(fs_ptr, rev)
+ node = svn.fs.check_path(root, path)
+ return NODE_TYPE_MAPPING.get(node, None)
+
+ def get_nodes(self, wire, path, revision=None):
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ if revision is None:
+ revision = svn.fs.youngest_rev(fsobj)
+ root = svn.fs.revision_root(fsobj, revision)
+ entries = svn.fs.dir_entries(root, path)
+ result = []
+ for entry_path, entry_info in entries.iteritems():
+ result.append(
+ (entry_path, NODE_TYPE_MAPPING.get(entry_info.kind, None)))
+ return result
+
+ def get_file_content(self, wire, path, rev=None):
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ if rev is None:
+ rev = svn.fs.youngest_rev(fsobj)
+ root = svn.fs.revision_root(fsobj, rev)
+ content = svn.core.Stream(svn.fs.file_contents(root, path))
+ return content.read()
+
+ def get_file_size(self, wire, path, revision=None):
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+ if revision is None:
+ revision = svn.fs.youngest_rev(fsobj)
+ root = svn.fs.revision_root(fsobj, revision)
+ size = svn.fs.file_length(root, path)
+ return size
+
+ def create_repository(self, wire, compatible_version=None):
+ log.info('Creating Subversion repository in path "%s"', wire['path'])
+ self._factory.repo(wire, create=True,
+ compatible_version=compatible_version)
+
+ def import_remote_repository(self, wire, src_url):
+ repo_path = wire['path']
+ if not self.is_path_valid_repository(wire, repo_path):
+ raise Exception(
+ "Path %s is not a valid Subversion repository." % repo_path)
+ # TODO: johbo: URL checks ?
+ rdump = subprocess.Popen(
+ ['svnrdump', 'dump', '--non-interactive', src_url],
+ stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ load = subprocess.Popen(
+ ['svnadmin', 'load', repo_path], stdin=rdump.stdout)
+
+ # TODO: johbo: This can be a very long operation, might be better
+ # to track some kind of status and provide an api to check if the
+ # import is done.
+ rdump.wait()
+ load.wait()
+
+ if rdump.returncode != 0:
+ errors = rdump.stderr.read()
+ log.error('svnrdump dump failed: statuscode %s: message: %s',
+ rdump.returncode, errors)
+ reason = 'UNKNOWN'
+ if 'svnrdump: E230001:' in errors:
+ reason = 'INVALID_CERTIFICATE'
+ raise Exception(
+ 'Failed to dump the remote repository from %s.' % src_url,
+ reason)
+ if load.returncode != 0:
+ raise Exception(
+ 'Failed to load the dump of remote repository from %s.' %
+ (src_url, ))
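The `svnrdump | svnadmin load` pair above follows the standard two-`Popen` pipeline pattern: hand the producer's `stdout` to the consumer's `stdin`, then close the parent's copy of that handle so the consumer sees EOF when the producer exits. A portable sketch using `sys.executable` as a stand-in for the two svn binaries (note that reading `rdump.stderr` only after `wait()` with `stderr=PIPE`, as the code above does, can deadlock if the pipe buffer fills; `communicate()` avoids that):

```python
import subprocess
import sys

# Stand-ins for the `svnrdump dump` / `svnadmin load` pair: a producer
# that emits lines and a consumer that upper-cases its stdin.
producer = subprocess.Popen(
    [sys.executable, '-c', "print('revision 1'); print('revision 2')"],
    stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    [sys.executable, '-c',
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=producer.stdout, stdout=subprocess.PIPE)

# Close the parent's copy of the producer's stdout so the consumer sees
# EOF once the producer exits (mirrors handing rdump.stdout to svnadmin).
producer.stdout.close()

output = consumer.communicate()[0].decode()
producer.wait()
print(output)
```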
+
+ def commit(self, wire, message, author, timestamp, updated, removed):
+ assert isinstance(message, str)
+ assert isinstance(author, str)
+
+ repo = self._factory.repo(wire)
+ fsobj = svn.repos.fs(repo)
+
+ rev = svn.fs.youngest_rev(fsobj)
+ txn = svn.repos.fs_begin_txn_for_commit(repo, rev, author, message)
+ txn_root = svn.fs.txn_root(txn)
+
+ for node in updated:
+ TxnNodeProcessor(node, txn_root).update()
+ for node in removed:
+ TxnNodeProcessor(node, txn_root).remove()
+
+ commit_id = svn.repos.fs_commit_txn(repo, txn)
+
+ if timestamp:
+ apr_time = apr_time_t(timestamp)
+ ts_formatted = svn.core.svn_time_to_cstring(apr_time)
+ svn.fs.change_rev_prop(fsobj, commit_id, 'svn:date', ts_formatted)
+
+ log.debug('Committed revision "%s" to "%s".', commit_id, wire['path'])
+ return commit_id
+
+ def diff(self, wire, rev1, rev2, path1=None, path2=None,
+ ignore_whitespace=False, context=3):
+ wire.update(cache=False)
+ repo = self._factory.repo(wire)
+ diff_creator = SvnDiffer(
+ repo, rev1, path1, rev2, path2, ignore_whitespace, context)
+ return diff_creator.generate_diff()
+
+
+class SvnDiffer(object):
+ """
+ Utility to create diffs based on difflib and the Subversion API
+ """
+
+ binary_content = False
+
+ def __init__(
+ self, repo, src_rev, src_path, tgt_rev, tgt_path,
+ ignore_whitespace, context):
+ self.repo = repo
+ self.ignore_whitespace = ignore_whitespace
+ self.context = context
+
+ fsobj = svn.repos.fs(repo)
+
+ self.tgt_rev = tgt_rev
+ self.tgt_path = tgt_path or ''
+ self.tgt_root = svn.fs.revision_root(fsobj, tgt_rev)
+ self.tgt_kind = svn.fs.check_path(self.tgt_root, self.tgt_path)
+
+ self.src_rev = src_rev
+ self.src_path = src_path or self.tgt_path
+ self.src_root = svn.fs.revision_root(fsobj, src_rev)
+ self.src_kind = svn.fs.check_path(self.src_root, self.src_path)
+
+ self._validate()
+
+ def _validate(self):
+ if (self.tgt_kind != svn.core.svn_node_none and
+ self.src_kind != svn.core.svn_node_none and
+ self.src_kind != self.tgt_kind):
+ # TODO: johbo: proper error handling
+ raise Exception(
+ "Source and target are not compatible for diff generation. "
+ "Source type: %s, target type: %s" %
+ (self.src_kind, self.tgt_kind))
+
+ def generate_diff(self):
+ buf = StringIO.StringIO()
+ if self.tgt_kind == svn.core.svn_node_dir:
+ self._generate_dir_diff(buf)
+ else:
+ self._generate_file_diff(buf)
+ return buf.getvalue()
+
+ def _generate_dir_diff(self, buf):
+ editor = DiffChangeEditor()
+ editor_ptr, editor_baton = svn.delta.make_editor(editor)
+ svn.repos.dir_delta2(
+ self.src_root,
+ self.src_path,
+ '', # src_entry
+ self.tgt_root,
+ self.tgt_path,
+ editor_ptr, editor_baton,
+ authorization_callback_allow_all,
+ False, # text_deltas
+ svn.core.svn_depth_infinity, # depth
+ False, # entry_props
+ False, # ignore_ancestry
+ )
+
+ for path, __, change in sorted(editor.changes):
+ self._generate_node_diff(
+ buf, change, path, self.tgt_path, path, self.src_path)
+
+ def _generate_file_diff(self, buf):
+ change = None
+ if self.src_kind == svn.core.svn_node_none:
+ change = "add"
+ elif self.tgt_kind == svn.core.svn_node_none:
+ change = "delete"
+ tgt_base, tgt_path = vcspath.split(self.tgt_path)
+ src_base, src_path = vcspath.split(self.src_path)
+ self._generate_node_diff(
+ buf, change, tgt_path, tgt_base, src_path, src_base)
+
+ def _generate_node_diff(
+ self, buf, change, tgt_path, tgt_base, src_path, src_base):
+ tgt_full_path = vcspath.join(tgt_base, tgt_path)
+ src_full_path = vcspath.join(src_base, src_path)
+
+ self.binary_content = False
+ mime_type = self._get_mime_type(tgt_full_path)
+ if mime_type and not mime_type.startswith('text'):
+ self.binary_content = True
+ buf.write("=" * 67 + '\n')
+ buf.write("Cannot display: file marked as a binary type.\n")
+ buf.write("svn:mime-type = %s\n" % mime_type)
+ buf.write("Index: %s\n" % (tgt_path, ))
+ buf.write("=" * 67 + '\n')
+ buf.write("diff --git a/%(tgt_path)s b/%(tgt_path)s\n" % {
+ 'tgt_path': tgt_path})
+
+ if change == 'add':
+ # TODO: johbo: SVN is missing a zero here compared to git
+ buf.write("new file mode 10644\n")
+ buf.write("--- /dev/null\t(revision 0)\n")
+ src_lines = []
+ else:
+ if change == 'delete':
+ buf.write("deleted file mode 10644\n")
+ buf.write("--- a/%s\t(revision %s)\n" % (
+ src_path, self.src_rev))
+ src_lines = self._svn_readlines(self.src_root, src_full_path)
+
+ if change == 'delete':
+ buf.write("+++ /dev/null\t(revision %s)\n" % (self.tgt_rev, ))
+ tgt_lines = []
+ else:
+ buf.write("+++ b/%s\t(revision %s)\n" % (
+ tgt_path, self.tgt_rev))
+ tgt_lines = self._svn_readlines(self.tgt_root, tgt_full_path)
+
+ if not self.binary_content:
+ udiff = svn_diff.unified_diff(
+ src_lines, tgt_lines, context=self.context,
+ ignore_blank_lines=self.ignore_whitespace,
+ ignore_case=False,
+ ignore_space_changes=self.ignore_whitespace)
+ buf.writelines(udiff)
+
+ def _get_mime_type(self, path):
+ try:
+ mime_type = svn.fs.node_prop(
+ self.tgt_root, path, svn.core.SVN_PROP_MIME_TYPE)
+ except svn.core.SubversionException:
+ mime_type = svn.fs.node_prop(
+ self.src_root, path, svn.core.SVN_PROP_MIME_TYPE)
+ return mime_type
+
+ def _svn_readlines(self, fs_root, node_path):
+ if self.binary_content:
+ return []
+ node_kind = svn.fs.check_path(fs_root, node_path)
+ if node_kind not in (
+ svn.core.svn_node_file, svn.core.svn_node_symlink):
+ return []
+ content = svn.core.Stream(
+ svn.fs.file_contents(fs_root, node_path)).read()
+ return content.splitlines(True)
+
+
+class DiffChangeEditor(svn.delta.Editor):
+ """
+ Records changes between two given revisions
+ """
+
+ def __init__(self):
+ self.changes = []
+
+ def delete_entry(self, path, revision, parent_baton, pool=None):
+ self.changes.append((path, None, 'delete'))
+
+ def add_file(
+ self, path, parent_baton, copyfrom_path, copyfrom_revision,
+ file_pool=None):
+ self.changes.append((path, 'file', 'add'))
+
+ def open_file(self, path, parent_baton, base_revision, file_pool=None):
+ self.changes.append((path, 'file', 'change'))
+
+
+def authorization_callback_allow_all(root, path, pool):
+ return True
+
+
+class TxnNodeProcessor(object):
+ """
+ Utility to process the change of one node within a transaction root.
+
+ It encapsulates the knowledge of how to add, update or remove
+ a node for a given transaction root. The purpose is to support the method
+ `SvnRemote.commit`.
+ """
+
+ def __init__(self, node, txn_root):
+ assert isinstance(node['path'], str)
+
+ self.node = node
+ self.txn_root = txn_root
+
+ def update(self):
+ self._ensure_parent_dirs()
+ self._add_file_if_node_does_not_exist()
+ self._update_file_content()
+ self._update_file_properties()
+
+ def remove(self):
+ svn.fs.delete(self.txn_root, self.node['path'])
+ # TODO: Clean up directory if empty
+
+ def _ensure_parent_dirs(self):
+ curdir = vcspath.dirname(self.node['path'])
+ dirs_to_create = []
+ while not self._svn_path_exists(curdir):
+ dirs_to_create.append(curdir)
+ curdir = vcspath.dirname(curdir)
+
+ for curdir in reversed(dirs_to_create):
+ log.debug('Creating missing directory "%s"', curdir)
+ svn.fs.make_dir(self.txn_root, curdir)
+
+ def _svn_path_exists(self, path):
+ path_status = svn.fs.check_path(self.txn_root, path)
+ return path_status != svn.core.svn_node_none
+
+ def _add_file_if_node_does_not_exist(self):
+ kind = svn.fs.check_path(self.txn_root, self.node['path'])
+ if kind == svn.core.svn_node_none:
+ svn.fs.make_file(self.txn_root, self.node['path'])
+
+ def _update_file_content(self):
+ assert isinstance(self.node['content'], str)
+ handler, baton = svn.fs.apply_textdelta(
+ self.txn_root, self.node['path'], None, None)
+ svn.delta.svn_txdelta_send_string(self.node['content'], handler, baton)
+
+ def _update_file_properties(self):
+ properties = self.node.get('properties', {})
+ for key, value in properties.iteritems():
+ svn.fs.change_node_prop(
+ self.txn_root, self.node['path'], key, value)
+
+
+def apr_time_t(timestamp):
+ """
+ Convert a Python timestamp into APR timestamp type apr_time_t
+ """
+ return timestamp * 1E6
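APR timestamps are microseconds since the Unix epoch, so the conversion is a plain multiplication. A small sketch of the round trip; unlike the helper above, it truncates to `int`, matching the integral C `apr_time_t` type:

```python
def apr_time_t(timestamp):
    # Same conversion as the helper above: seconds -> APR microseconds.
    # int() keeps the result integral, like the C apr_time_t type.
    return int(timestamp * 1000000)


apr_ts = apr_time_t(1445000000)
print(apr_ts)              # 1445000000000000
print(apr_ts // 1000000)   # back to seconds: 1445000000
```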
+
+
+def svn_opt_revision_value_t(num):
+ """
+ Put `num` into a `svn_opt_revision_value_t` structure.
+ """
+ value = svn.core.svn_opt_revision_value_t()
+ value.number = num
+ revision = svn.core.svn_opt_revision_t()
+ revision.kind = svn.core.svn_opt_revision_number
+ revision.value = value
+ return revision
diff --git a/vcsserver/svn_diff.py b/vcsserver/svn_diff.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/svn_diff.py
@@ -0,0 +1,207 @@
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2004-2009 Edgewall Software
+# Copyright (C) 2004-2006 Christopher Lenz
+# All rights reserved.
+#
+# This software is licensed as described in the file COPYING, which
+# you should have received as part of this distribution. The terms
+# are also available at http://trac.edgewall.org/wiki/TracLicense.
+#
+# This software consists of voluntary contributions made by many
+# individuals. For the exact contribution history, see the revision
+# history and logs, available at http://trac.edgewall.org/log/.
+#
+# Author: Christopher Lenz
+
+import difflib
+
+
+def get_filtered_hunks(fromlines, tolines, context=None,
+ ignore_blank_lines=False, ignore_case=False,
+ ignore_space_changes=False):
+ """Retrieve differences in the form of `difflib.SequenceMatcher`
+ opcodes, grouped according to the ``context`` and ``ignore_*``
+ parameters.
+
+ :param fromlines: list of lines corresponding to the old content
+ :param tolines: list of lines corresponding to the new content
+ :param ignore_blank_lines: ignore differences that only involve blank lines
+ :param ignore_case: ignore differences in upper/lower case only
+ :param ignore_space_changes: ignore differences in the amount of whitespace
+ :param context: the number of "equal" lines kept for representing
+ the context of the change
+ :return: generator of grouped `difflib.SequenceMatcher` opcodes
+
+ If none of the ``ignore_*`` parameters is `True`, there's nothing
+ to filter out and the results will come straight from the
+ SequenceMatcher.
+ """
+ hunks = get_hunks(fromlines, tolines, context)
+ if ignore_space_changes or ignore_case or ignore_blank_lines:
+ hunks = filter_ignorable_lines(hunks, fromlines, tolines, context,
+ ignore_blank_lines, ignore_case,
+ ignore_space_changes)
+ return hunks
+
+
+def get_hunks(fromlines, tolines, context=None):
+ """Generator yielding grouped opcodes describing differences .
+
+ See `get_filtered_hunks` for the parameter descriptions.
+ """
+ matcher = difflib.SequenceMatcher(None, fromlines, tolines)
+ if context is None:
+ return (hunk for hunk in [matcher.get_opcodes()])
+ else:
+ return matcher.get_grouped_opcodes(context)
+
+
+def filter_ignorable_lines(hunks, fromlines, tolines, context,
+ ignore_blank_lines, ignore_case,
+ ignore_space_changes):
+ """Detect line changes that should be ignored and emits them as
+ tagged as "equal", possibly joined with the preceding and/or
+ following "equal" block.
+
+ See `get_filtered_hunks` for the parameter descriptions.
+ """
+ def is_ignorable(tag, fromlines, tolines):
+ if tag == 'delete' and ignore_blank_lines:
+ if ''.join(fromlines) == '':
+ return True
+ elif tag == 'insert' and ignore_blank_lines:
+ if ''.join(tolines) == '':
+ return True
+ elif tag == 'replace' and (ignore_case or ignore_space_changes):
+ if len(fromlines) != len(tolines):
+ return False
+ def norm(line):
+ if ignore_case:
+ line = line.lower()
+ if ignore_space_changes:
+ line = ' '.join(line.split())
+ return line
+ for i in range(len(fromlines)):
+ if norm(fromlines[i]) != norm(tolines[i]):
+ return False
+ return True
+
+ hunks = list(hunks)
+ opcodes = []
+ ignored_lines = False
+ prev = None
+ for hunk in hunks:
+ for tag, i1, i2, j1, j2 in hunk:
+ if tag == 'equal':
+ if prev:
+ prev = (tag, prev[1], i2, prev[3], j2)
+ else:
+ prev = (tag, i1, i2, j1, j2)
+ else:
+ if is_ignorable(tag, fromlines[i1:i2], tolines[j1:j2]):
+ ignored_lines = True
+ if prev:
+ prev = 'equal', prev[1], i2, prev[3], j2
+ else:
+ prev = 'equal', i1, i2, j1, j2
+ continue
+ if prev:
+ opcodes.append(prev)
+ opcodes.append((tag, i1, i2, j1, j2))
+ prev = None
+ if prev:
+ opcodes.append(prev)
+
+ if ignored_lines:
+ if context is None:
+ yield opcodes
+ else:
+ # we leave at most n lines with the tag 'equal' before and after
+ # every change
+ n = context
+ nn = n + n
+
+ group = []
+ def all_equal():
+ return all(op[0] == 'equal' for op in group)
+ for idx, (tag, i1, i2, j1, j2) in enumerate(opcodes):
+ if idx == 0 and tag == 'equal': # Fixup leading unchanged block
+ i1, j1 = max(i1, i2 - n), max(j1, j2 - n)
+ elif tag == 'equal' and i2 - i1 > nn:
+ group.append((tag, i1, min(i2, i1 + n), j1,
+ min(j2, j1 + n)))
+ if not all_equal():
+ yield group
+ group = []
+ i1, j1 = max(i1, i2 - n), max(j1, j2 - n)
+ group.append((tag, i1, i2, j1, j2))
+
+ if group and not (len(group) == 1 and group[0][0] == 'equal'):
+ if group[-1][0] == 'equal': # Fixup trailing unchanged block
+ tag, i1, i2, j1, j2 = group[-1]
+ group[-1] = tag, i1, min(i2, i1 + n), j1, min(j2, j1 + n)
+ if not all_equal():
+ yield group
+ else:
+ for hunk in hunks:
+ yield hunk
+
+
+NO_NEWLINE_AT_END = '\\ No newline at end of file'
+
+
+def unified_diff(fromlines, tolines, context=None, ignore_blank_lines=0,
+ ignore_case=0, ignore_space_changes=0, lineterm='\n'):
+ """
+ Generator producing lines corresponding to a textual diff.
+
+ See `get_filtered_hunks` for the parameter descriptions.
+ """
+ # TODO: johbo: Check if this can be nicely integrated into the matching
+ if ignore_space_changes:
+ fromlines = [l.strip() for l in fromlines]
+ tolines = [l.strip() for l in tolines]
+
+ for group in get_filtered_hunks(fromlines, tolines, context,
+ ignore_blank_lines, ignore_case,
+ ignore_space_changes):
+ i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4]
+ if i1 == 0 and i2 == 0:
+ i1, i2 = -1, -1 # support for Add changes
+ if j1 == 0 and j2 == 0:
+ j1, j2 = -1, -1 # support for Delete changes
+ yield '@@ -%s +%s @@%s' % (
+ _hunk_range(i1 + 1, i2 - i1),
+ _hunk_range(j1 + 1, j2 - j1),
+ lineterm)
+ for tag, i1, i2, j1, j2 in group:
+ if tag == 'equal':
+ for line in fromlines[i1:i2]:
+ if not line.endswith(lineterm):
+ yield ' ' + line + lineterm
+ yield NO_NEWLINE_AT_END + lineterm
+ else:
+ yield ' ' + line
+ else:
+ if tag in ('replace', 'delete'):
+ for line in fromlines[i1:i2]:
+ if not line.endswith(lineterm):
+ yield '-' + line + lineterm
+ yield NO_NEWLINE_AT_END + lineterm
+ else:
+ yield '-' + line
+ if tag in ('replace', 'insert'):
+ for line in tolines[j1:j2]:
+ if not line.endswith(lineterm):
+ yield '+' + line + lineterm
+ yield NO_NEWLINE_AT_END + lineterm
+ else:
+ yield '+' + line
+
+
+def _hunk_range(start, length):
+ if length != 1:
+ return '%d,%d' % (start, length)
+ else:
+ return '%d' % (start, )
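The `@@ -a,b +c,d @@` headers produced by `unified_diff` follow the unified-diff convention of omitting the length when it is exactly 1. A standalone restatement of `_hunk_range` plus the header construction:

```python
def hunk_range(start, length):
    # Same logic as _hunk_range above: unified-diff ranges omit the
    # length field when it is exactly 1.
    if length != 1:
        return '%d,%d' % (start, length)
    return '%d' % (start,)


# Header for a hunk covering old lines 3-5 and new lines 3-6:
header = '@@ -%s +%s @@' % (hunk_range(3, 3), hunk_range(3, 4))
print(header)   # @@ -3,3 +3,4 @@

# A one-line side collapses to a bare line number:
print('@@ -%s +%s @@' % (hunk_range(10, 2), hunk_range(9, 1)))
```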
diff --git a/vcsserver/utils.py b/vcsserver/utils.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/utils.py
@@ -0,0 +1,57 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+
+# TODO: johbo: That's a copy from rhodecode
+def safe_str(unicode_, to_encoding=['utf8']):
+ """
+ Safe str function. Performs a few tricks to turn unicode_ into a str.
+
+ In case of UnicodeEncodeError we try to encode it with the encoding
+ detected by the chardet library; if that fails, we fall back to a str
+ with errors replaced.
+
+ :param unicode_: unicode to encode
+ :rtype: str
+ :returns: str object
+ """
+
+ # if it's not basestr cast to str
+ if not isinstance(unicode_, basestring):
+ return str(unicode_)
+
+ if isinstance(unicode_, str):
+ return unicode_
+
+ if not isinstance(to_encoding, (list, tuple)):
+ to_encoding = [to_encoding]
+
+ for enc in to_encoding:
+ try:
+ return unicode_.encode(enc)
+ except UnicodeEncodeError:
+ pass
+
+ try:
+ import chardet
+ encoding = chardet.detect(unicode_)['encoding']
+ if encoding is None:
+ raise UnicodeEncodeError(
+ 'chardet', unicode_, 0, 1, 'encoding could not be detected')
+
+ return unicode_.encode(encoding)
+ except (ImportError, UnicodeEncodeError):
+ return unicode_.encode(to_encoding[0], 'replace')
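On Python 3, where `basestring` and implicit str/bytes coercion are gone, the same fallback strategy looks like the sketch below. The name `safe_bytes` and the omission of the chardet step are my own; this only illustrates the try-each-encoding-then-replace pattern:

```python
def safe_bytes(text, to_encoding=('utf8',)):
    """Python 3 sketch of the same fallback strategy: try each encoding
    in order, then fall back to the first one with errors replaced."""
    if not isinstance(text, str):
        return str(text).encode(to_encoding[0])
    for enc in to_encoding:
        try:
            return text.encode(enc)
        except UnicodeEncodeError:
            pass
    return text.encode(to_encoding[0], 'replace')


print(safe_bytes(u'gr\xfc\xdfe'))                     # UTF-8 succeeds
print(safe_bytes(u'gr\xfc\xdfe', ('ascii', 'utf8')))  # ascii fails, utf8 wins
print(safe_bytes(123))                                # non-str cast first
```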
diff --git a/vcsserver/wsgi_app_caller.py b/vcsserver/wsgi_app_caller.py
new file mode 100644
--- /dev/null
+++ b/vcsserver/wsgi_app_caller.py
@@ -0,0 +1,116 @@
+# RhodeCode VCSServer provides access to different vcs backends via network.
+# Copyright (C) 2014-2016 RhodeCode GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+"""Extract the responses of a WSGI app."""
+
+__all__ = ('WSGIAppCaller',)
+
+import io
+import logging
+import os
+
+
+log = logging.getLogger(__name__)
+
+DEV_NULL = open(os.devnull, 'wb') # wsgi.errors must be writable
+
+
+def _complete_environ(environ, input_data):
+ """Update the missing wsgi.* variables of a WSGI environment.
+
+ :param environ: WSGI environment to update
+ :type environ: dict
+ :param input_data: data to be read by the app
+ :type input_data: str
+ """
+ environ.update({
+ 'wsgi.version': (1, 0),
+ 'wsgi.url_scheme': 'http',
+ 'wsgi.multithread': True,
+ 'wsgi.multiprocess': True,
+ 'wsgi.run_once': False,
+ 'wsgi.input': io.BytesIO(input_data),
+ 'wsgi.errors': DEV_NULL,
+ })
+
+
+# pylint: disable=too-few-public-methods
+class _StartResponse(object):
+ """Save the arguments of a start_response call."""
+
+ __slots__ = ['status', 'headers', 'content']
+
+ def __init__(self):
+ self.status = None
+ self.headers = None
+ self.content = []
+
+ def __call__(self, status, headers, exc_info=None):
+ # TODO(skreft): do something meaningful with the exc_info
+ exc_info = None # avoid dangling circular reference
+ self.status = status
+ self.headers = headers
+
+ return self.write
+
+ def write(self, content):
+ """Write method returning when calling this object.
+
+ All the data written is then available in content.
+ """
+ self.content.append(content)
+
+
+class WSGIAppCaller(object):
+ """Calls a WSGI app."""
+
+ def __init__(self, app):
+ """
+ :param app: WSGI app to call
+ """
+ self.app = app
+
+ def handle(self, environ, input_data):
+ """Process a request with the WSGI app.
+
+ The returned data of the app is fully consumed into a list.
+
+ :param environ: WSGI environment to update
+ :type environ: dict
+ :param input_data: data to be read by the app
+ :type input_data: str
+
+ :returns: a tuple with the contents, status and headers
+ :rtype: (list, str, list<(str, str)>)
+ """
+ _complete_environ(environ, input_data)
+ start_response = _StartResponse()
+ log.debug("Calling wrapped WSGI application")
+ responses = self.app(environ, start_response)
+ responses_list = list(responses)
+ existing_responses = start_response.content
+ if existing_responses:
+ log.debug(
+ "Adding returned response to response written via write()")
+ existing_responses.extend(responses_list)
+ responses_list = existing_responses
+ if hasattr(responses, 'close'):
+ log.debug("Closing iterator from WSGI application")
+ responses.close()
+
+ log.debug("Handling of WSGI request done, returning response")
+ return responses_list, start_response.status, start_response.headers
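The calling convention above can be exercised without the vcsserver wiring: a recorder object stands in for `start_response`, and any output produced via the returned `write()` callable is prepended to the iterable the app returns. A self-contained sketch (the app and recorder names are illustrative):

```python
import io


class StartResponseRecorder(object):
    """Minimal stand-in for _StartResponse: records status and headers
    and hands back a write() callable that buffers its input."""

    def __init__(self):
        self.status = None
        self.headers = None
        self.content = []

    def __call__(self, status, headers, exc_info=None):
        self.status = status
        self.headers = headers
        return self.content.append   # the write() callable


def hello_app(environ, start_response):
    """Tiny WSGI app echoing the request body."""
    body = b'Hello ' + environ['wsgi.input'].read()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]


environ = {'REQUEST_METHOD': 'POST', 'wsgi.input': io.BytesIO(b'world')}
start_response = StartResponseRecorder()
responses = list(hello_app(environ, start_response))
# Output written via write() must come before the returned iterable:
responses = start_response.content + responses

print(start_response.status)   # 200 OK
print(responses)               # [b'Hello world']
```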