Cdacians


Thursday, 7 September 2017

The dark side of Jack and Jill


Last year Google released a new toolchain - Jack (Java Android Compiler Kit) and Jill (Jack Intermediate Library Linker) - which is intended to replace the existing javac + dx pipeline.
In this article I will try to gather my thoughts and concerns regarding this new toolchain.
But before I start digging deeper into Jack & Jill, I want to take a little detour and give you a high-level overview of the existing toolchain and the process of compiling your beloved Android app.

Android code compilation 101

To be completely honest, I will not go through the entire build process - I will only concentrate on the part which is most relevant to our topic - transforming Java source code into a DEX file.
Ever since the first dinosaur stepped on this planet, the compilation process has gone as follows:
We start with plain Java source code. The goal is to compile this source code into executable instructions which the JVM on your device can understand.
For plain Java (not Android) application, we just need a Java compiler (javac). This beast can compile Java source code into Java bytecode (*.class files). Java bytecode can be executed by a regular JVM which (most likely) is running on your machine.
The thing is that on Android we use a non-standard JVM. We use a modified version which is highly optimized for the mobile environment. This JVM is called Dalvik (or ART on L+ devices, which is even more performant).
So since the JVM is modified, the Java bytecode needs to be modified as well, so that Dalvik can understand it. That's the responsibility of the dx tool - it takes Java bytecode (*.class files) and transforms it into Android-friendly bytecode (a *.dex file).
It's interesting to mention that when you include a third-party library in your project, it comes as a JAR (or AAR for Android libs), which in turn is nothing more than a zipped collection of *.class files [1]. So a third-party lib goes straight to the dx tool because we don't need to compile it.
So far it's pretty simple, right?

Bytecode manipulation

As time passed and Android developers became more experienced, people started developing cool tools and plugins which can enhance your code at the Java bytecode level (a.k.a bytecode manipulation).
The most popular tools you have probably heard of:
  • Proguard
  • Jacoco coverage
  • Retrolambda
  • ...and many more
This gave us a cool ability to post-process our code w/o making changes to our original sources. F.i. Proguard can analyze your Java bytecode and remove the parts which are not used (also known as minification). Or Retrolambda replaces Java 8 lambdas with anonymous inner classes, so your "lambdas" work on the Android VM, which does not support Java 8 features [2].
Here is what it looks like:
Each class (its bytecode) is processed by the bytecode manipulation plugin, and the result is fed to the dx tool to produce the final DEX output.

Transform API

As the number of such tools started to increase, it became obvious that the Android Gradle build system was not really designed for bytecode manipulators. The only way to "catch" the moment when the Java bytecode is ready, but not yet processed by dx, was to add a Gradle task dependency on an existing task created by the Android Gradle plugin. The name of that task was an implementation detail - it was generated dynamically based on the project configuration, and Google kept changing it as the Android Gradle plugin evolved. This led to the problem that all those plugins kept breaking with every new Android plugin release.
So Google needed to act. And they did - they introduced the Transform API - a simple API which allows you to add a Transform - a class which will be called at the appropriate time in the build process, with Java bytecode as its input. This gives plugin developers a much more reliable way of manipulating bytecode and lets them stop using undocumented private APIs.
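To make this more concrete, here is a rough sketch of what registering a Transform looks like in a build script. The class and method names follow the Android Gradle plugin's transform API, but the body is heavily simplified and the exact packages and signatures varied between plugin versions - treat it as an illustration, not a reference implementation.

```groovy
import com.android.build.api.transform.*

// A minimal Transform sketch - names follow the Android Gradle plugin's
// transform API, but the implementation is simplified for illustration.
class MyTransform extends Transform {
    @Override
    String getName() { 'myTransform' }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        // we want to see compiled classes, not resources
        [QualifiedContent.DefaultContentType.CLASSES] as Set
    }

    @Override
    Set<? super QualifiedContent.Scope> getScopes() {
        // only this project's own classes
        [QualifiedContent.Scope.PROJECT] as Set
    }

    @Override
    boolean isIncremental() { false }

    @Override
    void transform(TransformInvocation invocation) {
        // here you would read the .class files from invocation.inputs,
        // rewrite them, and write the result via invocation.outputProvider
    }
}

android.registerTransform(new MyTransform())
```

The build system then invokes your Transform at exactly the point where the bytecode is ready but not yet dexed - no more guessing task names.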

Jack & Jill

At the same time, somewhere in a parallel dungeon, a group of Googlers was super busy creating something new, something which would blow everybody's mind! Self-driving cars! Jack and Jill!
Jack is a compiler. It is similar to javac, but does a slightly different thing:
As you can see, Jack compiles Java source code straight into a DEX file! We don't have intermediate *.class files anymore, so the dx tool is not needed!
But wait! What if I include a third-party library in my project (which comes as a collection of .class files)?
And that's when Jill comes into play:
Jill can process class files and transform them into the special Jayce format, which can be used as input for the Jack compiler.
So now let's step aside for a second and think... What is going to happen to all those cool plugins we got so addicted to? They all need .class files and Jack compiler doesn't have those anymore...
Luckily, Jack provides some of those important features out of the box:
  • Retrolambda - will not be needed. Jack can handle lambdas properly
  • Proguard - it is baked into Jack now, so you can still use obfuscation and minimization
However, the list of downsides is a bit concerning:
  • The Transform API is not supported by Jack - there is no intermediate Java bytecode you can modify, so some plugins I didn't mention here will stop working
  • Annotation processing is not currently supported by Jack, so if you heavily depend on libraries like Dagger, AutoValue, etc., you should think twice before switching to Jack. EDIT: As pointed out by Jake Wharton, Jack in the N Preview has annotation processing support, but it is not yet exposed through Gradle.
  • Lint detectors which operate on the Java bytecode level are not supported
  • Jack is currently slower than javac + dx
  • Jacoco is not supported - well, I personally find Jacoco questionable (it doesn't really show what you want to see), so I can totally live without it
  • Dexguard - the enterprise version of Proguard - is not currently supported
I realize that the issues I just mentioned are temporary and Google is actively working on addressing them, but unfortunately all that excitement around Android supporting Java 8 features will fade pretty soon once people start to realize the real cost of switching to the new toolchain.
Jack is a really cool move and will give Google much more control and flexibility over the build pipeline, but it is at a very early stage, and it will take a while before it starts gaining popularity.
Always yours,
Pavel Dudka
  1. An AAR is actually a bit more than a JAR - it also includes Android-related data like assets, resources and other data which a regular JAR doesn't support 
  2. The Android VM running on the latest N preview supports some Java 8 instructions 

Gradle tip #3: Tasks ordering



I've noticed that a problem I face quite often when working with Gradle is tasks ordering (for either existing tasks or my custom ones). Apparently my build works better when my tasks are executed at the right moment of the build process :)
So let's dig deeper into how we can change task execution order.

dependsOn

I believe the most obvious way of telling your task to execute after some other task is to use the dependsOn method.
Let's consider an existing task A, next to which we need to add a task B that executes only after task A:
This is probably the easiest thing you can do. Given that tasks A and B are already defined:
task A << {println 'Hello from A'}
task B << {println 'Hello from B'}
What you need to do is just tell Gradle that task B depends on task A:
B.dependsOn A
This means that whenever I try to execute task B - Gradle will take care of executing task A as well:
paveldudka$ gradle B
:A
Hello from A
:B
Hello from B
Alternatively, you could declare such a dependency right inside task configuration section:
task A << {println 'Hello from A'}
task B {
    dependsOn A
    doLast {
        println 'Hello from B'  
    }
}
Result is the same.
But what if we want to insert our task into an already existing task graph?
The process is pretty much the same:
original task graph:
task A << {println 'Hello from A'}
task B << {println 'Hello from B'}
task C << {println 'Hello from C'}

B.dependsOn A
C.dependsOn B
our new custom task:
task B1 << {println 'Hello from B1'}
B1.dependsOn B
C.dependsOn B1
output:
paveldudka$ gradle C
:A
Hello from A
:B
Hello from B
:B1
Hello from B1
:C
Hello from C
Please note that dependsOn adds tasks to a set of dependencies. Thus it is totally fine to depend on multiple tasks:
task B1 << {println 'Hello from B1'}
B1.dependsOn B
B1.dependsOn Q
output:
paveldudka$ gradle B1
:A
Hello from A
:B
Hello from B
:Q
Hello from Q
:B1
Hello from B1
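As a side note, dependsOn also accepts several tasks in a single call, so the two dependsOn lines above could be collapsed into one:

```groovy
// equivalent to the two separate calls - dependencies accumulate in a set
B1.dependsOn B, Q
```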

mustRunAfter

Now imagine that our task depends on 2 other tasks. For this example I decided to use a more real-life case: imagine I have one task for unit tests and another for UI tests, plus a task which executes both unit & UI tests:
task unit << {println 'Hello from unit tests'}
task ui << {println 'Hello from UI tests'}
task tests << {println 'Hello from all tests!'}

tests.dependsOn unit
tests.dependsOn ui
output:
paveldudka$ gradle tests
:ui
Hello from UI tests
:unit
Hello from unit tests
:tests
Hello from all tests!
Even though the unit and ui tasks will be executed before the tests task, the order of execution of ui and unit is not determined. Right now I believe they will be executed in alphabetical order, but this behavior is an implementation detail and you definitely should not rely on it.
Since UI tests take much longer to execute than unit tests, I want my unit tests to run first and, only if everything is OK, proceed to executing the UI tests. So what should I do if I want my unit tests to run before the UI tests?
One way of solving this would be to make the UI test task depend on the unit test task:
task unit << {println 'Hello from unit tests'}
task ui << {println 'Hello from UI tests'}
task tests << {println 'Hello from all tests!'}

tests.dependsOn unit
tests.dependsOn ui
ui.dependsOn unit // <-- I added this dependency
output
paveldudka$ gradle tests
:unit
Hello from unit tests
:ui
Hello from UI tests
:tests
Hello from all tests!
Now my unit tests are getting executed before UI tests! Great!
BUT! There is one really big fat nasty problem with this approach! My UI tests do not really depend on my unit tests. I wanna be able to run my UI tests separately, but now every time I want to run my UI tests, my unit tests will be run as well!
That's where the mustRunAfter method comes into play. It tells Gradle to run a task after the task specified as an argument. So essentially we do not introduce a dependency between our unit tests and UI tests; instead we tell Gradle to give unit tests priority if the two are executed together, so unit tests are executed before our UI test suite:
task unit << {println 'Hello from unit tests'}
task ui << {println 'Hello from UI tests'}
task tests << {println 'Hello from all tests!'}

tests.dependsOn unit
tests.dependsOn ui
ui.mustRunAfter unit
output
paveldudka$ gradle tests
:unit
Hello from unit tests
:ui
Hello from UI tests
:tests
Hello from all tests!
And in the resulting dependency graph, notice that we lost the explicit dependency between the UI tests and the unit tests! Now if I decide to run just the UI tests, my unit tests won't be executed.
Please note that mustRunAfter is marked as "incubating" (as of Gradle 2.4) which means that this is an experimental feature and its behavior can be changed in future releases.
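Worth mentioning alongside it: Gradle also offers shouldRunAfter (likewise incubating), a softer variant which expresses the same ordering preference but is treated only as a hint - Gradle is allowed to ignore it, for example to break an ordering cycle or during parallel execution:

```groovy
// same intent as mustRunAfter, but only a hint - no hard ordering guarantee
ui.shouldRunAfter unit
```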

finalizedBy

Now I have a task which runs both UI and unit tests. Great! Let's say each of them produces a test report, so I decided to create a task which merges the 2 test reports into one:
task unit << {println 'Hello from unit tests'}
task ui << {println 'Hello from UI tests'}
task tests << {println 'Hello from all tests!'}
task mergeReports << {println 'Merging test reports'}

tests.dependsOn unit
tests.dependsOn ui
ui.mustRunAfter unit
mergeReports.dependsOn tests
Now if I want to get test report with both UI & unit tests - I execute mergeReports task:
paveldudka$ gradle mergeReports
:unit
Hello from unit tests
:ui
Hello from UI tests
:tests
Hello from all tests!
:mergeReports
Merging test reports
It works, but... it looks sloppy. The mergeReports task doesn't make a lot of sense from the user's perspective (by user I mean developer :) ). I want to be able to execute tests and get a merged report. Obviously, I could add the merge logic inside the tests task, but for the sake of this demo I want to keep it in a separate mergeReports task.
The finalizedBy method comes to the rescue. Its name is quite self-explanatory - it adds a finalizer task to this task.
So let's modify our script as follows:
task unit << {println 'Hello from unit tests'}
task ui << {println 'Hello from UI tests'}
task tests << {println 'Hello from all tests!'}
task mergeReports << {println 'Merging test reports'}

tests.dependsOn unit
tests.dependsOn ui
ui.mustRunAfter unit
mergeReports.dependsOn tests

tests.finalizedBy mergeReports
Now I'm able to execute tests task and I still get my merged test report:
paveldudka$ gradle tests
:unit
Hello from unit tests
:ui
Hello from UI tests
:tests
Hello from all tests!
:mergeReports
Merging test reports
Please note that finalizedBy is marked as "incubating" (as of Gradle 2.4) which means that this is an experimental feature and its behavior can be changed in future releases.
This is pretty much it - with these 3 tools you can easily tune your build process!
Happy gradling!

Gradle tip #2: understanding syntax



In Part 1 we talked about tasks and the different stages of the build lifecycle. But after I published it, I realized that before we jump into Gradle specifics, it is very important to understand what we are dealing with - to understand its syntax and stop being scared when we see complex build.gradle scripts. With this article I will try to fill this gap.

Syntax

Gradle build scripts are written in Groovy, so before we start analyzing them, I want to touch (briefly) on some key Groovy concepts. Groovy syntax is somewhat similar to Java, so hopefully you won't have many problems understanding it.
If you feel comfortable with Groovy - feel free to skip this section.
There is one important Groovy concept you need to understand in order to understand Gradle scripts - the Closure.

Closures

The closure is a key concept which we need to grasp to better understand Gradle. A closure is a standalone block of code which can take arguments, return values and be assigned to a variable. It is some sort of a mix between a Callable interface, a Future, a function pointer, you name it...
Essentially it is a block of code which is executed when you call it, not when you create it. Let's see a simple closure example:
def myClosure = { println 'Hello world!' }

//execute our closure
myClosure()

#output: Hello world!
Or here is a closure which accepts a parameter:
def myClosure = {String str -> println str }

//execute our closure
myClosure('Hello world!')

#output: Hello world!
Or if a closure accepts only 1 parameter, it can be referenced as it:
def myClosure = {println it }

//execute our closure
myClosure('Hello world!')

#output: Hello world!
Or if a closure accepts multiple input parameters:
def myClosure = {String str, int num -> println "$str : $num" }

//execute our closure
myClosure('my string', 21)

#output: my string : 21
By the way, argument types are optional, so example above can be simplified to:
def myClosure = {str, num -> println "$str : $num" }

//execute our closure
myClosure('my string', 21)

#output: my string : 21
One cool feature is that a closure can reference variables from the current context (read: class). By default, the current context is the class within which the closure was created:
def myVar = 'Hello World!'
def myClosure = {println myVar}
myClosure()

#output: Hello world!
Another cool feature is that current context for the closure can be changed by calling Closure#setDelegate(). This feature will become very important later:
def myClosure = {println myVar} //I'm referencing myVar from MyClass class
MyClass m = new MyClass()
myClosure.setDelegate(m)
myClosure()

class MyClass {
    def myVar = 'Hello from MyClass!'
}

#output: Hello from MyClass!
As you can see, at the moment when we created the closure, the myVar variable didn't exist. And this is perfectly fine - it just needs to be present in the closure's context at the point when we execute the closure.
In this case I modified the closure's context right before I executed it, so myVar was available.
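If both the original context and the delegate can resolve the same name, Groovy's resolveStrategy decides who wins. Here is a small plain-Groovy sketch (runnable outside Gradle; the MyClass name is just for illustration):

```groovy
class MyClass {
    def myVar = 'Hello from MyClass!'
}

myVar = 'Hello from the script!'   // no 'def' - lives in the script binding
def myClosure = { println myVar }

// default strategy is OWNER_FIRST: the script's own myVar is found
myClosure()

// switch the lookup order so the delegate is consulted first
myClosure.delegate = new MyClass()
myClosure.resolveStrategy = Closure.DELEGATE_FIRST
myClosure()                        // now the delegate's myVar wins
```

Gradle leans on exactly this: when a script block hands your closure to a configuration object, it typically sets DELEGATE_FIRST so names resolve against that object first.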

Pass closure as an argument

The real benefit of closures is the ability to pass them to different methods, which helps us decouple execution logic.
In the previous section we already used this feature when we passed a closure to another class instance. Now we will go through the different ways to call a method which accepts a closure:
  1. method accepts 1 parameter - closure
    myMethod(myClosure)
  2. if method accepts only 1 parameter - parentheses can be omitted
    myMethod myClosure
  3. I can create in-line closure
    myMethod {println 'Hello World'}
  4. method accepts 2 parameters
    myMethod(arg1, myClosure)
  5. or the same as '4', but closure is in-line
    myMethod(arg1, { println 'Hello World' })
  6. if last parameter is closure - it can be moved out of parentheses
    myMethod(arg1) { println 'Hello World' }
At this point I really have to draw your attention to examples #3 and #6. Don't they remind you of something from Gradle scripts? ;)
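To demystify patterns #3 and #6 a bit, here is a toy sketch of how such a method can be implemented - the Repositories class and the repositories method below are made up for illustration, not Gradle's actual implementation. The trick is exactly the delegation mechanism from the previous section: take the closure, point its delegate at a configuration object, and call it:

```groovy
// a made-up configuration object, standing in for Gradle's real one
class Repositories {
    def jcenter() { println 'configuring jcenter repository' }
}

// a made-up DSL method: accepts a closure and executes it
// against a Repositories instance
def repositories(Closure closure) {
    closure.delegate = new Repositories()
    closure.resolveStrategy = Closure.DELEGATE_FIRST
    closure()
}

// reads exactly like a Gradle script block (pattern #3/#6 above):
repositories {
    jcenter()
}
```

This is essentially the shape of every script block you will meet in a Gradle build file.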

Gradle

Now we know the mechanics, but how does this relate to actual Gradle scripts? Let's take a simple Gradle script as an example and try to understand it:
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.2.3'
    }
}

allprojects {
    repositories {
        jcenter()
    }
}
Look at that! Knowing Groovy syntax we can somewhat understand what is happening here!
  • there is (somewhere) a buildscript method which accepts closure:
    def buildscript(Closure closure)
  • there is (somewhere) an allprojects method which accepts closure:
    def allprojects(Closure closure)
...and so on.
This is cool, but this information alone is not particularly helpful... What does "somewhere" mean? We need to know exactly where these methods are declared.
And the answer is - Project

Project

This is a key for understanding Gradle scripts:
All top level statements within build script are delegated to Project instance
This means that Project - is the starting point for all my searches.
This being said - let's try to find buildscript method.
If we search for buildscript, we will find the buildscript {} script block. But wait... what the hell is a script block??? According to the documentation:
A script block is a method call which takes a closure as a parameter
Ok! We found it! That's exactly what happens when we call buildscript { ... } - we execute the method buildscript, which accepts a Closure.
If we keep reading the buildscript documentation, it says: "Delegates to: ScriptHandler from buildscript". This means that the execution scope for the closure we pass as an input parameter will be changed to ScriptHandler. In our case we passed a closure which executes the repositories(Closure) and dependencies(Closure) methods. Since the closure is delegated to ScriptHandler, let's try to search for the dependencies method within the ScriptHandler class.
And here it is - void dependencies(Closure configureClosure) - which, according to the documentation, configures the dependencies for the script. Here we see another piece of terminology: "Executes the given closure against the DependencyHandler." This means exactly the same as "delegates to [something]" - the closure will be executed in the scope of another class (in our case - DependencyHandler).
"Delegates to [something]" and "configures [something]" are 2 statements which mean exactly the same thing - the closure will be executed against the specified class.
Gradle extensively uses this delegation strategy, so it is really important to understand terminology here.
For the sake of completeness, let's see what happens when we execute the closure {classpath 'com.android.tools.build:gradle:1.2.3'} within the DependencyHandler context. According to the documentation, this class configures the dependencies for a given configuration, and the syntax should be:
<configurationName> <dependencyNotation1>
So with our closure we are configuring the configuration named classpath to use com.android.tools.build:gradle:1.2.3 as a dependency.

Script blocks

By default, there is a set of pre-defined script blocks within Project, but Gradle plugins are allowed to add new script blocks!
It means that if you see something like something { ... } at the top level of your build script and you can't find either a script block or a method which accepts a closure in the documentation, then most likely some plugin you applied added this script block.

android Script block

Let's take a look at the default Android app/build.gradle build script:
apply plugin: 'com.android.application'

android {
    compileSdkVersion 22
    buildToolsVersion "22.0.1"

    defaultConfig {
        applicationId "com.trickyandroid.testapp"
        minSdkVersion 16
        targetSdkVersion 22
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}
As we can see, it seems like there should be an android method which accepts a Closure as a parameter. But if we try to search for such a method in the Project documentation, we won't find one. And the reason for that is simple - there is no such method :)
If you look closely at the build script, you can see that before we execute the android method, we apply the com.android.application plugin! And that's the answer! The Android application plugin extends the Project object with the android script block (which is simply a method which accepts a Closure and delegates it to the AppExtension class [1]).
But where can I find Android plugin documentation? And the answer is - you can download documentation from the official Android Tools website (or here is a direct link to documentation).
If we open AppExtension documentation - we will find all the methods and attributes from our build script:
  1. compileSdkVersion 22. If we search for compileSdkVersion, we will find a property. In this case we assign 22 to the compileSdkVersion property
  2. the same story with buildToolsVersion
  3. defaultConfig - is a script block which delegates execution to the ProductFlavor class
  4. .....and so on
So now we have really powerful ability to understand the syntax of Gradle build scripts and search for documentation.

Exercise

With this powerful ability (oh, that sounds awesome), let's go ahead and try to reconfigure something :)
In AppExtension I found the script block testOptions, which delegates its Closure to the TestOptions class. Going to the TestOptions class, we can see that there are 2 properties: reportDir and resultsDir. According to the documentation, reportDir is responsible for the test report location. Let's change it!
android {
......
    testOptions {
        reportDir "$rootDir/test_reports"
    }
}
Here I used the rootDir property from the Project class, which points to the root project directory.
So now if I execute ./gradlew connectedCheck, my test report will go into the [rootProject]/test_reports directory.
Please don't do this in your real project - all build artifacts should go into the build dir, so you don't pollute your project structure.
Happy gradling!
P.S. Thanks a lot @Mark Vieira for proof-reading this article!
  1. It is worth mentioning that the "com.android.library" plugin delegates the closure to the "LibraryExtension" class instead of "AppExtension"