This is the second article of the "Java 9, OSGi and the Future of Modularity" series. Please see also Part 1: Java 9, OSGi and the Future of Modularity.
This continues our deep dive into OSGi and the Java Platform Module System (JPMS), scheduled to be released as part of Java 9. In part one we compared the two module systems at a high level, describing how each tackles isolation between modules. We drilled into how dependencies work, and we looked at some of the issues around reflection. In this second part we discuss versioning, dynamic module loading and the potential for future interoperability of OSGi and JPMS.
Versioning
Versioning is a critical aspect of all software delivery. Both APIs and implementations change, so whenever we depend on them we are implicitly depending on them as they existed at a certain point in time. Any module system must be able to deal with this reality... usually by supporting explicit versions on both artifacts and on the dependencies which refer to them.
However, not all changes are equally disruptive. If we build and test our software using version 1.0.0 of some module, then it seems likely that our software will continue to work when we deploy version 1.0.1 or 1.0.5... whereas it's very unlikely that our software will work if deployed with version 2.0.0 or version 5.2.10 of the dependency. This suggests that a module system needs to understand and support compatibility ranges.
OSGi has always supported both of these concepts. Bundles are versioned, as are exported packages. Imports of packages refer to ranges, which are normally expressed with an inclusive lower bound and an exclusive upper bound, e.g. "[1.0.0, 2.0.0)", which means every version from 1.0.0 up to but not including 2.0.0. OSGi uses semantic versioning, fully aligned with the popular Semantic Versioning specification (although OSGi predates that document). Loosely, the first segment is the major version, indicating breaking changes to functionality and APIs; the second is the minor version, indicating non-breaking functional enhancements; and the third segment indicates patches to existing functionality.
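For illustration, these versions and ranges appear directly as manifest headers; the following is a minimal sketch using hypothetical bundle and package names:

    Bundle-SymbolicName: com.example.consumer
    Bundle-Version: 1.0.0
    Export-Package: com.example.consumer.api;version="1.2.0"
    Import-Package: com.example.logging;version="[1.0.0,2.0.0)"

Here the bundle exports its own API package at version 1.2.0 and accepts any version of the com.example.logging package from 1.0.0 up to, but not including, 2.0.0.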
OSGi developers do not need to reason about or explicitly state these version ranges. Just like the imports themselves, version ranges are generated automatically at build time by analysing the nature of the dependency. For example, if we merely use an API package as a consumer, then we get a wide range like "[1.0.0, 2.0.0)" that includes all of the minor and micro releases. But if we are implementing a service interface as a provider then we must import the package with a narrow range such as "[1.0.0, 1.1.0)", meaning every version from 1.0.0 up to but not including 1.1.0. The difference here is that a provider that supports version 1.0.0 functionality would not support 1.1.0, because the increment of the minor segment implies new functionality that the provider cannot automatically provide. On the other hand, a consumer can easily use 1.1.0 or 1.2.0 and so on, because it simply ignores the new functionality.
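In practice, bnd decides whether a bundle is a consumer or a provider of an API with the help of the OSGi versioning annotations. Below is a sketch with a hypothetical service interface; the org.osgi.annotation.versioning annotations are assumed to be on the build path:

    // A hypothetical API package and service interface.
    package com.example.payments.api;

    import org.osgi.annotation.versioning.ProviderType;

    @ProviderType  // only API providers are expected to implement this interface
    public interface PaymentService {
        void pay(String account, long amountInCents);
    }

A bundle that merely calls PaymentService imports com.example.payments.api with the wide consumer range, whereas a bundle that implements the interface is treated as a provider and receives the narrow range.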
In addition to generated ranges on imports, OSGi build tooling (bnd) assists with getting the version of an exported package correct. The version is a property of a package, and it can be written directly into the package using a @Version annotation on package-info.java.
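For example, a package-info.java for a hypothetical exported API package might look like this:

    // package-info.java in the exported API package
    @org.osgi.annotation.versioning.Version("1.1.0")
    package com.example.payments.api;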
It's important to update this version whenever the content of the package changes: for example, if we add a method to a service interface we need to increment the version from 1.0.0 to 1.1.0. The build tooling checks that the version accurately reflects the nature of the change that was made. For example, it will break the build if we add that method but forget to change the version at all, or if we make too small a change, such as incrementing only to 1.0.1.
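In bnd this check is known as baselining; assuming a repository of previously released bundles is configured to compare against, it can be switched on with a single instruction in the project's bnd.bnd file:

    # bnd.bnd -- compare every exported package against the last released bundle
    -baseline: *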
Finally, OSGi has the flexibility to allow multiple versions of a module to be deployed simultaneously in a single application. This situation can arise if some of our dependencies have transitive dependencies on different versions of a common library like slf4j or Guava. There are some limitations – we cannot directly import multiple versions of a package within a single module – but it's invaluable to have the capability for those times when it is really needed.
All this means that OSGi has a comprehensive approach that allows us to build modules in separate teams and organisations, and later assemble modules into an application. The tools give us high confidence that the selected set of modules will actually work together.
In contrast, the JPMS has almost no support for any form of versioning.
There is no way to indicate the version of a module in module-info.java (there is a Version attribute in the compiled module-info.class file, but it does not come from the Java source, and it is unclear how it will be used in practice). Dependencies are also unversioned: JPMS modules can only require other modules by name, not by version, and certainly not by version range. These features will have to be added by external tools, but that effort is hampered because the module-info.java source file is not extensible and Java annotations cannot be used in it.
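For illustration, a minimal module-info.java might look like the following (the module and package names are hypothetical); note that there is nowhere to state a version, let alone a range:

    // module-info.java -- illustrative only
    module com.example.app {
        requires com.example.library;   // dependency by name only: no version, no range
        exports com.example.app.api;
    }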
The JPMS requirements state that the selection of compatible versions to be used at runtime is out of scope. This means other tools will have to do this job – but they cannot do it without suitable metadata. It would have been natural to store version metadata in the same descriptor as the basic module metadata, but this will not be possible.
Also, as we have already mentioned, under JPMS it is forbidden to have multiple versions of the same module present simultaneously. Additionally, it is forbidden for more than one module to export the same package, or even to have overlapping private packages. Therefore whatever tooling is used to construct valid sets of modules will have to find a solution to conflicting transitive dependencies. In many cases the "solution" will simply be that certain modules cannot be used in combination with other modules.
Dynamics
As a happy side-effect of OSGi's class loader-based implementation of isolation, it is possible to support dynamic loading, updating, and unloading of modules at runtime. This may not seem important in an enterprise context; indeed, most enterprise deployments of OSGi do not use dynamic updates. Nothing in OSGi says that you must perform dynamic updates!
But dynamic updates are invaluable in other contexts such as IoT. It is a massive headache to update software that is deployed across thousands or even millions of devices, over slow or intermittent networks. OSGi is one of very few technologies on any platform that has direct support for on-the-fly updates using the absolute minimum amount of data: we need only send the modules that have actually changed.
As far back as 2000, one of the major reasons for telecom operators to use OSGi on home gateways and routers for smart home solutions was the ability to manage software without performing a firmware update. Firmware updates were unappealing for several reasons:
- Downloads: a firmware update typically meant downloading megabytes of software to potentially millions of devices.
- Device specificity: firmware is device specific, so operators could end up creating, and managing the deployment of, many different updates.
- Testing: firmware updates require extensive, time-consuming and expensive stress testing, repeated for every device, every time.
OSGi simplifies this process significantly. Updates are applied as modules and installed on-the-fly to running gateways and routers, with no reboot required; the same module can be used on all devices, since it is normally abstracted from the underlying hardware; and, importantly, unit testing can be performed once on a much smaller set of software, saving huge amounts of time, effort and money.
One concrete example is Qivicon, an industry alliance founded by Deutsche Telekom. Qivicon provides home gateways with an OSGi-based software stack, a backend infrastructure, tooling for app developers, and maintenance and support. Using OSGi to underpin the ecosystem enables Qivicon partners to bring their own smart home products to market much faster.
Qivicon partners constantly integrate new devices and develop new innovative value added services. This requires sophisticated device management and software provisioning capabilities to ensure dependency and compatibility management of the software components for specific device platforms. These capabilities have been standardized in OSGi using existing industry standards such as TR-069 and OMA-DM.
Furthermore, dynamic behaviour is relevant beyond software updates.
The OSGi Service Registry is inherently dynamic. Services can come and go, and components that bind to services are notified in real time. Services allow the constantly changing state of the real world to be represented and reported. This is relevant even in the comparatively sedate world of enterprise applications. For example, OSGi services can represent the availability of external data feeds, or the IP addresses of a load-balanced REST service, or even the opening hours of a financial market. Each component that consumes a service can decide how to react when that service is unavailable: it can keep going, or unregister its own provided service. Thus changes in low-level state are reliably propagated to wherever they have impact.
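As a sketch of how this looks in code, here is a minimal Declarative Services component; the MarketDataFeed service interface is hypothetical and the standard DS annotations (org.osgi.service.component.annotations) are assumed to be available:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;
    import org.osgi.service.component.annotations.ReferenceCardinality;
    import org.osgi.service.component.annotations.ReferencePolicy;

    @Component
    public class FeedMonitor {

        // Called whenever a matching MarketDataFeed service is registered...
        @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
        void addFeed(MarketDataFeed feed) {
            System.out.println("Feed available: " + feed);
        }

        // ...and whenever one is unregistered, so the component can react immediately.
        void removeFeed(MarketDataFeed feed) {
            System.out.println("Feed lost: " + feed);
        }
    }

    interface MarketDataFeed { /* hypothetical service API */ }

The component is notified the moment a feed appears or disappears, rather than discovering a stale dependency on its next call.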
Interoperability and Futures
JPMS will be released in mainstream Java next year with the release of Java 9. There exist a substantial number of applications written in OSGi, with more being written all the time. Are they safe, or must they be rewritten for the new JPMS module system?
The first thing to say is that OSGi applications will certainly work unchanged on Java 9, so long as they do not use unsupported, internal Java APIs. This is the same general advice for all Java code. OSGi uses only supported Java APIs, and Oracle has given a strong commitment that Java 9 will not break such applications. Any problems you do have with Java 9 are likely to come from libraries that use internal JDK types, which are no longer accessible in Java 9 except via special configuration flags. OSGi users are going to be much better prepared for this change because their modules have imports explicitly listed. An OSGi-based application is much clearer about the scope of its dependence on the platform, compared with a normal Java application that simply piles JARs on the classpath.
In this most basic compatibility mode, the OSGi Framework and bundles will exist entirely within the “unnamed” module of JPMS. OSGi will continue to offer all its existing isolation features, along with its powerful service registry and dynamic loading. Your investments in OSGi are safe, and OSGi remains a great choice for new projects.
But hopefully we can do even better than that. When OSGi is running on a modular Java 9 platform, we should be able to take advantage of the modules in the platform. For example, it should be possible for an OSGi bundle to declare the set of platform modules it depends on – that is, we should be able to depend directly from an OSGi bundle to a JPMS module. The OSGi Framework should respect those dependencies at runtime, and the tools should be able to prepare runtimes based on them.
Things already look quite good here. In a blog post dated November 2015, I described a proof-of-concept that I built to demonstrate OSGi running on JPMS. I detailed how OSGi bundles could declare dependencies on specific JPMS modules in the base platform. I showed how OSGi would reject a bundle if it depended on a JPMS module that was not in the platform. I did not prototype the tool for assembling runtimes, but all of the pieces already exist to start creating such a tool.
Figure 3 shows how the interoperability could work in the future. We can see that Bundle A imports the package javax.activation, which under JPMS is exported by the java.activation module. The interop layer will know that the platform contains that module, allowing OSGi to resolve it. Bundle A doesn't need to change at all when moving to Java 9. Bundle B uses the java.net.http package, from the java.httpclient JPMS module, but this cannot be expressed as an OSGi Import-Package because it begins with "java." (note that all bundles and modules implicitly depend on java.base).
Therefore we propose a new OSGi header called “Require-PlatformModule” that expresses a requirement for a JPMS module. This would enable the OSGi framework to “fail fast” on Bundle B if the platform does not contain the java.httpclient module. It would also enable tools to construct a complete runtime for an application, using the minimal set of both JPMS modules and OSGi bundles.
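In manifest terms the proposal might look like the sketch below; the header name comes from the proof-of-concept, and the exact syntax would be settled by the specification process:

    Bundle-SymbolicName: com.example.bundle.b
    Bundle-Version: 1.0.0
    Require-PlatformModule: java.httpclient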
Again, it’s important to note that this work was an unofficial proof-of-concept, and the actual shape of OSGi’s interoperability with JPMS will be determined by the specification process.
Figure 3: OSGi – JPMS Interoperability Proof-of-Concept
Conclusion
JPMS, through the Jigsaw prototype project, has done a very good job of modularising the Java platform. As a result of this work it will be possible to construct very small runtimes containing only the parts of the Java platform that are required for a specific workload.
However, as a specification for application modularity, JPMS suffers from some serious shortcomings. The lack of any support for versioning is a shocking omission, and it's difficult to see how it will be practically possible to build applications without a parallel system of metadata provided by external tools. The whole-module dependency declarations will pull in more transitive dependencies than they need to, undercutting the gains made from having a smaller platform. The inaccessibility of non-exported packages to reflection will make it needlessly hard to use existing frameworks from the Java ecosystem.
These design decisions are probably the right ones for the JDK itself: they increase the robustness and security of the platform, without breaking backwards compatibility for all existing Java applications. But they make trade-offs that are, on balance, a poor choice for application modules.
So the future for OSGi seems bright: by combining OSGi with a trimmed-down, modular Java platform, we get the best of both worlds. With 16 years of experience, OSGi has encountered and solved problems that JPMS has not yet even considered. The ecosystem for OSGi tools and runtimes is wide and deep. And it is future-proofed by the support of a long-standing, proven, independent standards body. What are you waiting for?
About the Authors
Neil Bartlett is a principal engineer, consultant, trainer and developer with Paremus. Neil has been working with Java since 1998 and OSGi since 2003, and specialises in Java, OSGi, Eclipse and Haskell. He is the founder of the Bndtools Eclipse plugin, the leading IDE for OSGi. He can often be found on Twitter (@nbartlett) tweeting on all things #OSGi, and answering questions on Stack Overflow, where he is the only holder of a gold OSGi badge. Neil regularly contributes to the Paremus blogs and is also writing his second book, "Effective OSGi", which will show developers how to quickly accelerate their productivity with OSGi using the latest techniques and tools.
Kai Hackbarth is an Evangelist at Bosch Software Innovations. He has been deeply involved in the technical standardization activities of the OSGi Alliance for more than 15 years. Kai is a member of the OSGi Alliance Board of Directors and has been co-chair of the OSGi Residential Expert Group since 2008. Kai is coordinating several research project activities in various IoT domains. His key focus areas are smart homes, automotive, and the Internet of Things in general, where he actively supports the current developments and strategic positioning of the product portfolio.