
What does Coroutines’ creator think of Virtual Threads? – JVM Weekly vol. 132

In today’s edition we’re still somewhat on the KotlinConf theme, but we’ll go beyond that event by covering changes in the build-systems space, as well as a digest of a very interesting informational JEP touching on the topic of Java code encapsulation.


1. What does Roman Elizarov – the creator of Coroutines – think about Virtual Threads?

A week ago, I ended the edition with a cliffhanger, promising that today we would cover Roman Elizarov’s presentation, which kicked off the second day of KotlinConf and discussed what Project Loom will change in the world of Coroutines (spoiler: not that much, but more on that in a moment). The recording of the presentation isn’t online yet (though it will probably appear in a few days), but since I had a chance to watch it, I’ll share what we learned from it.

A warm welcome!
All pleasure is mine!

Right off the bat, I’ll mention that most of what was presented isn’t entirely new knowledge. However, for the first time we have information straight from the source – Roman Elizarov is not only the current lead for Kotlin but also the architect of Coroutines themselves. His viewpoint can be summarized as follows: Project Loom and Kotlin Coroutines both deal with the challenges of concurrent programming, but they aim at different aspects and therefore have different implementations. Project Loom focuses on expanding the existing thread APIs and aims to enable optimal scaling for server applications written in the thread-per-request style (a common pattern in the Java ecosystem). It introduces virtual threads as an extension of the API for “physical” threads, allowing physical threads to be swapped for virtual ones with minimal changes to the code. This is particularly useful for existing code that will be maintained for years and can be modernized with virtual threads.
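
To picture the “minimal changes” part, here’s a small sketch of my own (assuming JDK 21+): a classic thread-per-request loop where the only change is swapping a pool of platform threads for a virtual-thread-per-task executor – the blocking code inside stays untouched.

import java.util.concurrent.Executors

fun main() {
    // Same thread-per-request shape as before, just backed by virtual threads.
    Executors.newVirtualThreadPerTaskExecutor().use { executor ->
        repeat(10_000) { i ->
            executor.execute {
                Thread.sleep(100)   // blocking is fine – it parks a cheap virtual thread
                println("handled request $i on ${Thread.currentThread()}")
            }
        }
    }   // closing the executor waits for the submitted tasks to finish
}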

Kotlin Coroutines, on the other hand, work completely differently and at a different layer (the compiler generates additional code behind the scenes). This allows them to focus on providing the most convenient asynchronous APIs and fine-grained concurrency – both for server applications and for any other part of a project where executing a piece of code on another thread makes sense. Considering Kotlin’s importance in the Android world, it’s equally important to have good support for UI frameworks, for example. Kotlin Coroutines provide a lightweight way of managing concurrency through Structured Concurrency, which maintains the parent-child hierarchy and solves the surprisingly difficult problem of cancelling an already-started asynchronous workflow. Java also has Structured Concurrency, but Elizarov points out its limitations – it was bolted onto an already existing concurrency API rather than built in at the design level.
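
As a small illustration of that parent-child hierarchy (a sketch of mine, assuming kotlinx-coroutines on the classpath): cancelling the parent coroutine below tears down every child launched inside it, which is exactly the cancellation problem Structured Concurrency solves.

import kotlinx.coroutines.*

fun main() = runBlocking {
    val parent = launch {
        launch {                      // child 1
            delay(1_000)
            println("child 1 done")   // never printed – cancelled together with the parent
        }
        launch {                      // child 2
            delay(1_000)
            println("child 2 done")   // never printed either
        }
    }
    delay(100)
    parent.cancelAndJoin()            // one call cancels the whole subtree
    println("parent and both children are cancelled")
}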

A slide from the presentation that more or less sums it up

But lest anyone say that coroutines gain nothing from the arrival of virtual threads – one of their weak points is combining the asynchronous world with the “blocking” one, which can still create physical bottlenecks and forces the use of resource-hungry coroutine dispatchers such as Dispatchers.IO, running on classic blocking threads. Project Loom can help avoid blocking and wasting physical resources by running such interfaces on Virtual Threads, which were created to support exactly that – with compatibility with legacy code in mind. Loom’s Virtual Threads are thus more resource-efficient than “physical” threads (as hard as that is for me to admit), and since Coroutines were designed to run on top of any concurrency engine, they will be able to take advantage of the new JVM capabilities as well.
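
Here is what that bridge could look like in practice – a sketch of mine, assuming JDK 21+ and kotlinx-coroutines (the LoomDispatcher name and blockingLegacyCall function are purely illustrative): a dispatcher backed by virtual threads lets blocking calls park cheaply instead of occupying Dispatchers.IO platform threads.

import kotlinx.coroutines.*
import java.util.concurrent.Executors

// A dispatcher backed by virtual threads instead of a pool of platform threads
val LoomDispatcher = Executors.newVirtualThreadPerTaskExecutor().asCoroutineDispatcher()

fun blockingLegacyCall(): String {
    Thread.sleep(500)                 // stands in for a blocking JDBC / file / HTTP call
    return "result"
}

fun main() = runBlocking {
    val results = (1..1_000).map { id ->
        async(LoomDispatcher) {       // each blocking call parks a virtual thread, not a platform one
            "${blockingLegacyCall()} #$id"
        }
    }.awaitAll()
    println("finished ${results.size} blocking calls")
}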

Of course, the whole thing is probably a bit simplified – after all, this wasn’t a panel discussion between Ron Pressler and Roman Elizarov (gentlemen, do something like that; Avengers: Endgame would be surpassed as the crossover of the decade), but a one-sided argument from THE CREATOR of Coroutines at KOTLINCONF. Still, I highly recommend hunting down the video as soon as KotlinConf decides to publish the recordings – I’ll be sure to let you know when it appears.

And if you want a little extra context – I recommend this episode of “Talking Kotlin”

2. Changes in the world of build systems – Gradle and bld

That’s not the end of the threads continued from the previous edition, though. Last week we talked about Gradle, but mostly from the perspective of Kotlin-related news – so today it’s time to return to the 8.1 release itself, because it brings a lot of interesting new features that I didn’t have a chance to mention a week ago.

For example, the configuration cache has been stabilized; it stores the result of the configuration phase and reuses it in subsequent builds, shortening build times. The cache now fully supports dependency verification and local repositories, and has extended compatibility with the core plugins. And while we’re on the subject of configuration, Gradle has also added encryption of the configuration cache to protect sensitive data.
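
If you want to try it, the configuration cache is still opt-in – a single flag in gradle.properties turns it on (a minimal sketch; the rest of the file is up to you):

# gradle.properties
org.gradle.configuration-cache=true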

The JVM plugin now supports building projects with Java 20, and the CodeNarc plugin – used for static analysis of Groovy code – now runs in parallel. As for performance gains, Gradle itself got better memory management. In addition, new Dataflow actions have been introduced as a replacement for the existing buildFinished listener.

A week ago we wrote about the fact that Gradle for Android will use Kotlin by default. It turns out that the platform has gone a step further and decided to make build.gradle.kts the default format on all platforms. Somewhat related to that, the Kotlin DSL has seen improvements in several areas, including simple assignment of properties in Kotlin DSL scripts and better plugin support.
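
To show what the simple-assignment improvement means in practice (a sketch; in 8.1 this syntax is still behind an opt-in feature flag): lazy Property values in build.gradle.kts can now be set with a plain =, where previously a .set(...) call was required.

// build.gradle.kts
plugins {
    java
}

java {
    toolchain {
        // before: languageVersion.set(JavaLanguageVersion.of(20))
        languageVersion = JavaLanguageVersion.of(20)
    }
}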

Interestingly, despite so much news, it wasn’t Gradle that was talked about the most when it came to JVM build systems. It turns out there is a new player on the market with quite an interesting concept of its own – bld from RIFE2.

Life can be hard, Gradle

We’ll start with what RIFE2 is. The project may not be as widely known as some other Java web frameworks, but despite the (IMHO) awful name, its creators keep trying to break through to the mass consciousness – and for the first time they have succeeded. They have shown bld, in which they decided to approach building applications a bit differently from the competition. Both Gradle and Maven rely on declarative build configuration, whether via a DSL (in Groovy or Kotlin) or XML. bld, on the other hand, lets developers write build logic in pure Java. It was designed with simplicity of use in mind, emphasizing direct definition of the build process and avoiding the auto-magic elements often found in other build tools such as Gradle. The philosophy behind the tool strongly resembles the one adopted by AWS in its Cloud Development Kit.

Unlike Gradle, which calculates an execution plan up front, bld executes tasks immediately when they are defined. This difference in approach reduces the cognitive load on developers and makes it easier to reason about the build process. In addition, bld requires Java 17 and provides IDE autocompletion and Javadoc support for build definitions. Staying within Java means the build logic can be understood and maintained in the same ecosystem as the application code.

Here’s an example build definition for reference:

package com.example;

import rife.bld.Project;
import java.util.List;

import static rife.bld.dependencies.Repository.*;
import static rife.bld.dependencies.Scope.*;

public class MyappBuild extends Project {
    public MyappBuild() {
        pkg = "com.example";
        name = "Myapp";
        mainClass = "com.example.MyappMain";
        version = version(0,1,0);

        downloadSources = true;
        repositories = List.of(MAVEN_CENTRAL, RIFE2_RELEASES);
        scope(test)
            .include(dependency("org.junit.jupiter",
                                "junit-jupiter",
                                version(5,9,2)))
            .include(dependency("org.junit.platform",
                                "junit-platform-console-standalone",
                                version(1,9,2)));
    }

    public static void main(String[] args) {
        // the bld wrapper calls main() with the command to run (compile, test, ...)
        new MyappBuild().start(args);
    }
}

My comment: I’m torn between “a build system should be boring and predictable” and the fact that the CDK, built on a very similar philosophy, has a mass of fans and has become the de facto standard in the AWS world. I’m also afraid of a heavily leaking abstraction, and of the fact that without community support it will be hard to unseat such an entrenched ecosystem – I’ve already written more than once about how painful even converting a Groovy DSL to the Kotlin DSL can be, and that’s trivial compared to adopting a whole new tool with a different philosophy behind it. That doesn’t change the fact that I’ve put bld on my radar and would love to try it out someday.


3. Why does Java need strong platform encapsulation?

And finally, a JEP – because how could we do without one. We’ll deal with the candidate JEPs next week (and there are a couple of them), but today I wanted to introduce a new draft: JEP draft: Integrity and Strong Encapsulation. It’s a so-called informational JEP – a design doc of sorts – outlining certain design assumptions guiding the developers. It is worth reading, as it touches on a very interesting topic: the integrity of the entire platform and the encapsulation of its internals.

Integrity and encapsulation in programming languages refer to the concepts of preserving the consistency of data and restricting access to certain parts of an object, respectively. Integrity ensures that data and operations within a program follow specific rules and constraints, preventing undesired changes or behaviors. Encapsulation, on the other hand, hides the internal workings of an object and exposes only a well-defined interface, allowing developers to interact with the object without knowing or affecting its internal state. Both concepts promote modularity, maintainability, and robustness in software design.

Everyone knows that if you mark a field in a Java class as private, no one outside that class can access it. However, Reflection and other mechanisms have been able to bypass this over the years, for example via the java.lang.reflect.AccessibleObject.setAccessible method. The reasons for breaking encapsulation are many and varied: gaining access to unexposed functionality, e.g. for testing purposes, working around bugs (I myself once did this with a test version of AWS SDK 2.0 while it was still in beta), or improving performance – here the flagship example is sun.misc.Unsafe, which for years has been cited as the classic way of breaking the JVM’s safety guarantees. From the point of view of the JDK developers, such a situation is unenviable – it’s very difficult to evolve a platform when you’re never sure whether some refactoring will break a popular library (Hyrum’s Law in full effect). So it was decided to clean up this wild west a bit and improve the situation.
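
A toy example of the kind of encapsulation-breaking the JEP is talking about (my own sketch – the Vault class is made up): deep reflection happily reads a private field that the class never exposed.

import java.lang.reflect.Field

class Vault {
    private val secret = "hunter2"
}

fun main() {
    val field: Field = Vault::class.java.getDeclaredField("secret")
    field.isAccessible = true          // i.e. AccessibleObject.setAccessible(true)
    println(field.get(Vault()))        // prints "hunter2" despite the `private` modifier
}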

But, you know, truly tidy up

Strong encapsulation is the key to solving these problems. Work on introducing it into Java began around 2010, but its importance becomes clearer with each passing year. Java 9, with the introduction of modules, enforced strong encapsulation at compile time, but for backward-compatibility reasons (and for lack of meaningful alternatives to some internal solutions) still allowed deep reflection (the aforementioned setAccessible) at run time, with warnings. The situation evolved over the years: official replacements for internal JDK classes (like VarHandle) reduced the need to break encapsulation, while new APIs made the older practices obsolete. With JDK 16, the platform began to enforce strong encapsulation of JDK internals at run time (at least partially – the aforementioned sun.misc.Unsafe remained available), turning warnings into errors.
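
For code that genuinely still needs deep reflection into JDK internals on JDK 16+, the escape hatch has to be spelled out on the command line, which at least makes the decision visible to whoever runs the application (the module/package pair and jar name below are just an example):

java --add-opens java.base/java.lang=ALL-UNNAMED -jar my-app.jar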

Library authors using the mentioned internals.

Despite many steps forward, the assumed level of integrity in the Java platform has not yet been achieved, precisely because of the lack of widespread strong encapsulation. Some APIs, such as sun.misc.Unsafe or JNI, allow encapsulation and system integrity to be compromised. As a result, analyzing the security of the application and its dependencies becomes difficult, and updating the JDK version can be problematic. To achieve integrity, there are plans to gradually reduce these APIs and close the loopholes in future JEPs, which will require libraries to adapt to the changes. That’s where this JEP comes from – the developers want the entire community to be aware of their plans and be able to prepare for them, as well as discuss the consequences.

Finally, it’s worth wondering – if we’ve managed without this strong encapsulation for so long, why the increased interest in this topic now? It turns out that the reason is that more and more of the Java runtime environment is written in Java, so the developers themselves have a growing need for strong encapsulation. JDK maintenance was hampered by outdated packages, and migration between JDK versions was becoming problematic. Despite the superpower that the lack of internal integrity offered to some libraries, the situation is untenable in the long run.

If I were the creators of JDK, I would call the whole initiative the “Kryptonite Project”

That was my TL;DR of sorts. Of course, as is always the case with Ron Pressler’s JEPs, the whole thing contains a ton of additional detail that I had to simplify a bit in this abbreviated description. So treat the above as a teaser, and for all the details I refer you to JEP draft: Integrity and Strong Encapsulation.