Ensuring SciJava is backwards compatible without paralyzing the code

As some of you already know, I am embarking on a major update of the SciJava Common core library. It is being heavily modularized, to keep all layers and concerns as encapsulated as possible. This needs to happen to facilitate major improvements and enhancements to components like ImageJ Ops.

I have been grappling with how to do this major restructuring while retaining backwards compatibility for the SciJava plugins that are already in the wild. The ImageJ community has grown to expect that backwards compatibility of all plugins will be maintained forever, which is a very tall order.

While walking to work today, I had a Eureka! moment, and I think I have a solution that will meet all requirements. There are several related problems:

  • ImageJ update sites are not synced with Maven repositories.
  • The ImageJ and Fiji distributions are relatively large, with lots of JARs and a complex launcher.
  • People have plugins written with proper Maven dependencies, but built on all sorts of different component versions.
  • We want the freedom to update components and break past APIs, so that the future will be better, minimizing the impedance of past design decisions.

I believe all of these problems can be solved by the same piece of logic: scijava-grab. It can be made more robust and fully standalone, with no dependencies of its own, and can become the primary delivery vehicle for Java libraries in the end-user ImageJ application and elsewhere.

The plan has already been to use it for Jupyter notebooks, for complete reproducibility on the Java side. But more generally: why not always inspect a plugin’s dependencies (from the POM embedded in its JAR), dynamically grab them, and load and use them when running that plugin? This would be OSGi-ish, but not as complex as OSGi.
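To make the class-loading side concrete, here is a minimal sketch of per-plugin isolation, assuming each plugin’s grabbed JAR paths are already known. The class and method names here are my own invention, not scijava-grab API:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.util.List;

/** Hypothetical sketch: one isolated class loader per plugin. */
public class PluginLoaderSketch {

    /** Builds a class loader over a plugin's grabbed dependency JARs. */
    public static ClassLoader loaderFor(List<Path> grabbedJars) {
        URL[] urls = grabbedJars.stream().map(p -> {
            try {
                return p.toUri().toURL();
            } catch (java.net.MalformedURLException e) {
                throw new IllegalArgumentException(e);
            }
        }).toArray(URL[]::new);
        // Parent is the platform loader, so plugins share the JDK but
        // not each other's dependency versions -- the OSGi-ish part.
        return new URLClassLoader(urls, ClassLoader.getPlatformClassLoader());
    }
}
```

Each plugin invocation would then resolve classes against its own loader, so two plugins can use different versions of the same library; the cost is that objects passed between them are not type-compatible, which is exactly the version-skew concern raised below.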

The grabbed artifacts would be stored in ~/.scijava/repository or similar, and shared between all of your ImageJ installations, your Jupyter notebooks, and anything else that works via this mechanism. There are many advantages to this: e.g., you can have lots of different Fiji installations without them consuming a ton of disk space, as they do now.
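If the cache mirrors the standard Maven repository layout, mapping a GAV to its on-disk location is trivial. A sketch, where the helper name and the assumption that scijava-grab would use Maven’s layout are both mine:

```java
import java.nio.file.Path;

/** Sketch of GAV-to-path mapping in a shared, Maven-style local cache. */
public class LocalCacheSketch {

    /** E.g. org.scijava:scijava-common:3.0.0 under ~/.scijava/repository
        maps to org/scijava/scijava-common/3.0.0/scijava-common-3.0.0.jar. */
    public static Path artifactPath(Path cacheRoot,
            String groupId, String artifactId, String version) {
        return cacheRoot
            .resolve(groupId.replace('.', '/')) // group dots become directories
            .resolve(artifactId)
            .resolve(version)
            .resolve(artifactId + "-" + version + ".jar");
    }
}
```

Because the layout is deterministic, every ImageJ installation and notebook kernel can check the same location before grabbing, which is what makes the cache shareable.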

My biggest concern with this scheme is that the output of one plugin producing e.g. a v0.24.x net.imagej.Dataset may not feed into another plugin that expects a v1.5.x net.imagej.Dataset. This is OSGi’s flavor of “dependency hell.” I know that this problem has been addressed in the Java enterprise community, but off the top of my head I cannot recall the mechanism(s) for dealing with it. I recall reading about something that intelligently coerces an object of one class into an object of that same class at a different version, on a best-effort basis of course. Hence, this scheme is not a panacea for all version skew: we still need discipline in the core libraries, using interface-driven design and deprecating old API rather than removing it outright.
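For intuition about what “best-effort coercion” could mean, here is a toy reflection-based sketch that copies fields matching by name and type between two versions of “the same” class. The Dataset stand-ins are invented; real coercion would also have to account for behavior and invariants, which do not copy along with fields:

```java
import java.lang.reflect.Field;

/** Toy best-effort coercion between two versions of a class. */
public class VersionCoercer {

    /** Invented stand-ins for a class at two different versions. */
    public static class OldDataset { public String name = "blobs"; public int width = 256; }
    public static class NewDataset { public String name; public int width; public String unit; }

    /** Copies fields that match by name and assignable type; fields
        missing from the source keep the target's default value. */
    public static <T> T coerce(Object source, Class<T> targetType) {
        try {
            T target = targetType.getDeclaredConstructor().newInstance();
            for (Field tf : targetType.getDeclaredFields()) {
                try {
                    Field sf = source.getClass().getDeclaredField(tf.getName());
                    if (tf.getType().isAssignableFrom(sf.getType())) {
                        sf.setAccessible(true);
                        tf.setAccessible(true);
                        tf.set(target, sf.get(source));
                    }
                } catch (NoSuchFieldException e) {
                    // Absent in the source version: leave the default.
                }
            }
            return target;
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot coerce to " + targetType, e);
        }
    }
}
```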

Another concern is that it is non-trivial to extract the complete list of dependency GAVs from the embedded POM. It would probably be prudent to change the SciJava indexer to embed this list of GAVs directly into a separate metadata file at build time, rather than trying to reason about the dependency tree at runtime, where the Maven tooling is either not present or does not match what was used at build time. We obviously haven’t been embedding such a manifest at build time in the past, so we may still need to reason somehow about components already in the wild.
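To make the extraction difficulty concrete: even a naive scan of the embedded POM’s direct dependencies requires XML parsing, and it still ignores parent POMs, property interpolation, scopes, and transitive resolution, which is why a build-time manifest is attractive. A sketch using only the JDK’s DOM parser (class and method names are assumptions):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/** Naive direct-dependency extraction from POM XML. */
public class PomGavs {

    /** Returns groupId:artifactId:version for each dependency element.
        Ignores parent POMs, property interpolation, scopes, and
        transitive dependencies; also picks up dependencyManagement. */
    public static List<String> directDependencies(String pomXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(pomXml.getBytes(StandardCharsets.UTF_8)));
            List<String> gavs = new ArrayList<>();
            NodeList deps = doc.getElementsByTagName("dependency");
            for (int i = 0; i < deps.getLength(); i++) {
                Element dep = (Element) deps.item(i);
                gavs.add(text(dep, "groupId") + ":"
                    + text(dep, "artifactId") + ":"
                    + text(dep, "version"));
            }
            return gavs;
        } catch (Exception e) {
            throw new IllegalStateException("Unparseable POM", e);
        }
    }

    private static String text(Element parent, String tag) {
        NodeList nodes = parent.getElementsByTagName(tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : "?";
    }
}
```

The caveats listed in the comment are precisely the gap between this naive scan and what Maven computes at build time, which a generated manifest would close.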

If these obstacles can be overcome, then I think we’ll have a system where:

  • The base ImageJ download is much slimmer. (But needs to bootstrap the first time it is run.)
  • The contents and complexity of update sites are much reduced. The ImageJ core update site might be able to disappear completely.
  • ImageJ commands work more reproducibly, even as ImageJ itself continues to evolve and change.

This idea may be exchanging “classpath hell” for another form of “dependency hell” but I don’t see a better way forward. Other options I have considered include:

  • Break backwards compatibility. Tell people their old plugins will stop working in new versions of ImageJ. Possibly dead-end the update sites and tell everyone to download fresh copies of the next major version of ImageJ. And maybe repeat this process every few years. Many people are likely to be disappointed and frustrated by this decision.
  • Change the package prefix every time we make major backwards-incompatible changes. The new SciJava Common prefix could be org.scijava.v3 instead of just org.scijava. And then if/when there is a SciJava 4, we can use org.scijava.v4, etc. Then all major versions of SciJava can coexist on the same system classpath. Downsides of this scheme: the package prefixes might be confusing; we still don’t get full reproducibility since there is version slippage over the minor version increments; and plugins developed at different major versions are guaranteed not to work together unless we create explicit Converter plugins for them (yuck).
  • Make a best-effort to preserve existing API, deprecating old method signatures while introducing new ones. The downside is that it ties our hands w.r.t. the architecture, since we continually have to consider and test with the whole universe of existing plugins. For example, when a class moves to a new package, we could leave the old class in place with the old API and deprecate it. We could even place it into a dedicated scijava-compat component, to keep it tidily out of the way of new projects (like we did for imagej-deprecated). On the surface this seems really nice. But then we need to handle two cases—plugins using the old class, and plugins using the new class—and somehow behave sensibly across the various scenarios. Suppose the org.scijava.plugin.Parameter annotation moves to a new package (hint: it will :wink:). Some commands become annotated with the new @Parameter and some use the old one. So when scanning for parameters via reflection, we need to look for both. And this backwards compatibility consideration can never be removed from the code, ever. Speaking from experience, the pace at which new development can occur is 10x slower or worse when we need to continually consider such conundrums.
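To see why the dual-annotation case is so burdensome, consider a toy pair of annotations standing in for the old and new @Parameter (both invented here). Every reflection scan in the core must forever check both:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

/** Toy model of scanning for two generations of @Parameter. */
public class ParameterScan {

    /** Stand-in for the annotation at its old package. */
    @Retention(RetentionPolicy.RUNTIME)
    public @interface OldParameter {}

    /** Stand-in for the annotation after the package move. */
    @Retention(RetentionPolicy.RUNTIME)
    public @interface NewParameter {}

    /** The compatibility shim: check both annotations, forever. */
    public static List<String> parameterFields(Class<?> command) {
        List<String> names = new ArrayList<>();
        for (Field f : command.getDeclaredFields()) {
            if (f.isAnnotationPresent(OldParameter.class)
                || f.isAnnotationPresent(NewParameter.class)) {
                names.add(f.getName());
            }
        }
        return names;
    }

    /** A command mixing both annotation generations. */
    public static class MixedCommand {
        @OldParameter public int sigma;
        @NewParameter public String title;
        public double scratch; // not a parameter
    }
}
```

One such shim is manageable; the trouble is that every moved class and annotation multiplies these dual code paths, and none of them can ever be deleted.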

Comments, especially from other software architects, are very welcome. If anyone sees an easier way forward that solves the above, great. Otherwise, I’ll do some exploration in this class-loader-driven direction over the next couple of months.