New candidate JEP: 357: Migrate from Mercurial to Git

David Lloyd david.lloyd at redhat.com
Tue Jul 16 15:58:42 UTC 2019


On Tue, Jul 16, 2019 at 10:48 AM Aleksey Shipilev <shade at redhat.com> wrote:
>
> On 7/16/19 5:41 PM, David Lloyd wrote:
> > FWIW you can "seed" your clone if you have another one handy by
> > cloning the local directory e.g. "git clone -o local ../other-checkout
> > new-checkout".  Then fetching the upstream will only fetch objects
> > that weren't in "other-checkout".  I do this frequently for large
> > repositories, which is especially useful in laptop-in-coffee-shop
> > kinds of situations.
>
> Yes, but we are comparing "fresh clone from master server" use case here. I carry around tarballs of
> Mercurial workspaces from https://builds.shipilev.net/workspaces/, and it is also blazingly fast.
> But that misses the point I want to make.

Generally with Git there is no value in a fresh clone from master
unless you well and truly have no other checkouts anywhere (or the
project is small, which is not the case here).  It would be very
unusual for a developer to be *required* to clone from upstream more
than one time unless they were developing from a new, empty system and
didn't have reasonable access to any other system that had a clone.  I
am trying to recall a time when that was the case for me and failing:
the first thing I do before I travel is to re-fetch everything I'm
likely to be working on.  I don't think this use case is particularly
impactful unless we're talking orders-of-magnitude difference, and
even in that case it would be pretty mitigable without straying from
completely normal and idiomatic Git practices.  Normally for a large
project, you would clone something local, then fetch upstream to get
the most recent commits (which would be a tiny fraction of the overall
bandwidth).
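To make that concrete, here is a minimal sketch of the seed-then-fetch workflow. Everything is hypothetical and built locally (the "upstream" is simulated with a second local repo) so it runs without a network; the paths and remote names are illustrative, not from the original post.

```shell
set -e
# Illustrative only: create a throwaway area and a stand-in for an
# existing local checkout that already has the project's history.
tmp=$(mktemp -d)
git init -q "$tmp/seed"
git -C "$tmp/seed" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "existing history"

# Seed the new checkout from the local clone ("-o local" names the
# remote "local"); object transfer is disk-to-disk, so no bandwidth
# is spent re-downloading history.
git clone -q -o local "$tmp/seed" "$tmp/new-checkout"

# Then add the real upstream and fetch: only objects absent from the
# seed are transferred. (Here "upstream" points at the same repo, so
# the fetch has nothing new to pull down.)
git -C "$tmp/new-checkout" remote add upstream "$tmp/seed"
git -C "$tmp/new-checkout" fetch -q upstream
```

After this, day-to-day work only ever needs `git fetch upstream`, which moves just the recent commits rather than the whole history.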

Having tarballs of workspaces is not something a Git user would
normally need to do for any reason, regardless of the size of the
project.

-- 
- DML
