Why package deps won't work (Was: Re: Converting plain JARs to Java modules)
David M. Lloyd
david.lloyd at redhat.com
Tue Nov 15 14:56:07 PST 2011
On 11/15/2011 04:18 PM, Eric Johnson wrote:
> On 11/15/11 6:06 PM, David M. Lloyd wrote:
>> As with the equivalent situation on an operating system distribution,
>> the long term goal is always to phase out old versions and introduce
>> new versions over a period of time which is managed by a social
>> process. This type of setup has been proven to work in a number of
>> operating system distributions to date, and seems to work well for our
>> case as well.
>
> The logical conclusion from this, though, is that with future JDKs, each
> Linux distributor will have their own modularization of the JDK, so as
> to fit better with their package system, or their version definition, or
> the legacy accommodations that they've made. Wouldn't we all be better
> off if the "official" modularization of the JDK could mostly be used
> as-is, and only tweaked around the edges by distributions?
Sure, theoretically. This hasn't happened for any other language,
though, and I don't think it's realistic to expect that Java will be any
different. For most languages there are OS-level packages for each
module that the OS distributes, but there is by no means any guarantee
that a given OS would ship an entire module environment for a language.
IIRC, though, one thing you could do with Perl on, say, Fedora is use
CPAN locally to add your own collection of modules on top of the
operating system's module library. This type of overlay can be an
effective solution.
> The situation we ran into as an actual use case with OSGi, and the point
> I think Peter was making, is that if you use M2P dependencies, then you,
> as the modularization distributor, can make changes to the core modules
> without that fanning out to the entire dependency tree, because now you
> can move package X to module Y from original module O, and no dependent
> modules need to change.
>
> If you just use M2M dependencies, then when you try to remodularize for
> whatever reason, and do the same package move, then you have unpleasant
> options. Either you make a bad compromise for compatibility (O now
> depends on Y), or you have to update all the dependent modules. Yuck, or
> yuck.
If you're, say, splitting a module or reorganizing and you want to
avoid updating dependents, you always have the option of adding
compatibility aggregate modules and so on. But this type of refactoring
isn't too common in a static module repository, because libraries are
generally built for long-term backwards compatibility, which is tough
to maintain when you're moving packages around. So while M2P might
solve moving packages more elegantly (assuming single versions of all
packages, of course), it's not really a common case in an SE situation
and not a requirement (it's a nice-to-have at best).
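To make that concrete, here's a minimal sketch of the aggregate-module
trick using declarative module-info-style descriptors (the names O, Y,
and the package are hypothetical, and the syntax is illustrative only,
not from any particular prototype):

    // Package com.example.x used to live in module O but now lives in Y.
    module Y {
        exports com.example.x;
    }

    // O remains as a compatibility aggregate: "requires transitive"
    // re-exports Y to everything that reads O, so existing dependents
    // keep resolving without touching their own descriptors.
    module O {
        requires transitive Y;
    }

    // An existing dependent is untouched by the package move.
    module client {
        requires O; // still sees com.example.x, now via Y
    }

The cost, of course, is that O has to stick around indefinitely as a
shim.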
>> Like the operating system case, I believe that run-time dependencies
>> are the domain of the module distributor. The dependencies are often
>> slightly varied depending on who is installing the library, where, and
>> why. I think it's better to maintain a simple system which is flexible
>> enough to be customized by distribution than to try to come up with a
>> centralized or globalized dependency scheme. Hard realities like
>> licensing, implementation selection, and defective software will make
>> a pure globalized scheme unworkable. Take the simple division of
>> philosophy between Debian and Ubuntu or Fedora, where licensing
>> ideology makes a substantial difference in what is distributed and how.
>
> Hmmm. "runtime dependencies are the domain of the module distributor".
> Maybe we're just arguing terminology?
>
> I say runtime dependencies are sometimes the domain of the *deployer*,
> and to me, that's not the same as a module distributor (which I
> interpret as my Linux provider, or enterprise software provider).
Yeah, definitely. There is an overlap for sure. I think that in the OS
distribution case, for example, there will always be users who want to
develop against a wholly local repository, against a hybrid of locally
and globally installed modules, or purely against the global install.
There is also the case where a user wants to develop against their own
local install overlaid on a vendor distribution of modules (like an
application server or a large proprietary application).
Overall I think we definitely need the ability to support hybrid module
root configurations. And you shouldn't have to be a wizard to assemble
your own module distribution.
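For example, here's roughly what a hybrid configuration could look like
with JBoss Modules, whose LocalModuleLoader searches an ordered list of
repository roots (the paths and module name below are invented):

    import java.io.File;

    import org.jboss.modules.LocalModuleLoader;
    import org.jboss.modules.Module;
    import org.jboss.modules.ModuleIdentifier;
    import org.jboss.modules.ModuleLoadException;

    public class HybridRoots {
        public static void main(String[] args) throws ModuleLoadException {
            // Roots are searched in order, so a module installed in the
            // user's local repository shadows the same module name in
            // the OS-wide repository.
            LocalModuleLoader loader = new LocalModuleLoader(new File[] {
                new File("/home/alice/modules"),     // local overlay
                new File("/usr/share/java/modules")  // global install
            });
            Module module = loader.loadModule(
                    ModuleIdentifier.fromString("org.example.foo"));
            System.out.println("Loaded " + module);
        }
    }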
> Example: If my application requires an implementation of the
> javax.xml.parsers API, as an application developer, there are at least
> three possibilities: (a) determined by the OS distribution, (b)
> determined by other components of the system that feel a need to
> constrain this, or (c) specifically determined by my launch
> configuration. I might, as an example of the latter, at least from my
> IDE, wish to run test suites using a variety of different
> implementations of the API.
>
> If I start recording M2M dependencies, I might be over-constrained, and
> completely stuck if some existing module happens to include both
> javax.xml.parsers and an implementation of said APIs (as they
> used to do!).
If you use M2P dependencies for this, then you may not even be able to
launch without constructing and managing package indexes, creating a
number of extra package files, or running tooling to rebuild them.
One advantage of the M2M system is that you can overlay module
repositories fairly easily, simply by loading modules from each one in
sequence.
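The overlay itself is just a first-match-wins search over an ordered
list of repositories; no package index has to be built or rebuilt. A
minimal sketch, with a hypothetical ModuleRepository type standing in
for whatever the concrete repository abstraction is:

    import java.util.List;

    // Hypothetical stand-ins for illustration; not from any real API.
    record ModuleSpec(String name, String location) {}

    interface ModuleRepository {
        ModuleSpec find(String name); // null when the module is absent
    }

    final class OverlayLoader {
        private final List<ModuleRepository> repositories;

        OverlayLoader(List<ModuleRepository> repositories) {
            // Order matters: earlier repositories shadow later ones,
            // e.g. [userLocal, vendorDistribution, osGlobal].
            this.repositories = List.copyOf(repositories);
        }

        ModuleSpec load(String name) {
            for (ModuleRepository repository : repositories) {
                ModuleSpec spec = repository.find(name);
                if (spec != null) {
                    return spec; // first repository in sequence wins
                }
            }
            throw new IllegalArgumentException("module not found: " + name);
        }
    }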
--
- DML