Build your own framework using an annotation processor

Most developers in the JVM world work on various web applications, the majority of which are based on a framework like Spring or Micronaut. However, some people claim that frameworks introduce too much overhead. I decided to check how valid such claims are and how much work is necessary to replicate what frameworks provide us out of the box.

This article isn’t about whether it is feasible to use a framework or when to use one. It is about writing your own framework: tinkering is the best way of learning!

For the sake of simplicity, we will use the code of a demo application. The application consists of:

  • A single service
  • Two repositories
  • Two POJOs

No framework

The starting point of an application without a framework would look like the code below:
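
Below is a minimal sketch of that starting point. The exact names of the repositories, their in-memory implementations, and the participate call are illustrative; see the repository for the real code.

```java
public class NoFrameworkApp {
    public static void main(String[] args) {
        // All wiring is done by hand: the developer picks the implementations
        // and builds the object graph in the right order.
        ParticipantRepository participantRepository = new InMemoryParticipantRepository();
        EventRepository eventRepository = new InMemoryEventRepository();

        ParticipationService participationService =
                new ManualTransactionParticipationService(participantRepository, eventRepository);

        participationService.participate(new ParticipantId(1), new EventId(2));
    }
}
```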

As we can see, the application’s main method is responsible for providing the implementations of the interfaces that ManualTransactionParticipationService depends on. The developer must know which ParticipationService implementation to create in the main method. When using a framework, programmers typically don’t need to create instances and dependencies on their own; they rely on the core feature of frameworks: Dependency Injection.

So, let’s take a look at a simple implementation of the dependency injection container based on annotation processing.

What is Dependency Injection?

Dependency Injection Pattern

Dependency Injection, or DI, is a pattern in which a class receives its instance variables (its dependencies) from the outside instead of creating them itself.

But how is this done? The pattern separates responsibility for object creation from its usage. The required objects are provided (“injected”) during runtime, and the pattern’s implementation handles the creation and lifecycle of the dependencies.
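
A minimal illustration with hypothetical types: the first class creates its own dependency, while the second receives it from outside.

```java
interface ReportRepository {}
class SqlReportRepository implements ReportRepository {}

// Without DI: the class constructs its dependency itself,
// coupling it to one concrete implementation.
class TightlyCoupledReportService {
    private final ReportRepository repository = new SqlReportRepository();
}

// With DI: the dependency is injected through the constructor;
// something else (the DI container) creates it and manages its lifecycle.
class ReportService {
    private final ReportRepository repository;

    ReportService(ReportRepository repository) {
        this.repository = repository;
    }
}
```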

The pattern has advantages, like decreased coupling, simplified testing, and increased flexibility. But it also has drawbacks: dependence on a framework, harder debugging, and more work at the beginning of a project.

NOTE: Dependency Injection is an implementation of Inversion of Control!

Available Dependency Injection solutions

There are at least a few DI frameworks widely adopted in the Java world.

  • Spring — DI was the initial part of this project, and it’s still the core concept for the framework.
  • Guice — Google’s DI framework/library.
  • Dagger — popular in the Android world.
  • Micronaut — part of the framework.
  • Quarkus — part of the framework.
  • Java/Jakarta CDI — standard DI framework that originates in Java EE 6.

Most of these DI frameworks use annotations as one of the possible ways to configure the bindings. By bindings, I mean the configuration of which implementations should be used for interfaces or which dependencies should be provided to create objects.

In fact, DI is so popular that there was a Java Specification Request made for it.

Annotations handling

Runtime-based handling

Spring, the most popular Java framework, processes annotations at runtime. The solution is heavily based on the reflection mechanism. The reflection-based approach is one of the possible ways to handle annotations, and if you would like to follow that lead, please refer to Java Own Framework — step by step.

Compile-based handling

In addition to runtime handling, there is another approach: part of the dependency injection work can happen during annotation processing, a process that occurs at compile time. It has become popular lately thanks to Micronaut and Quarkus, as they both utilise this approach.

Annotation processing isn’t just for dependency injection; it is a part of various tools, for example libraries like Lombok or MapStruct.

Annotation Processing and Processors

The purpose of annotation processing is to generate new files, not to modify existing ones. It can also perform compile-time checks, like ensuring that all class fields are final. If something is wrong, the processor may fail the compilation and provide the programmer with information about the error.

Annotation processors are written in Java and are used by javac during the compilation. However, a processor must be compiled before it can be used; it cannot directly process itself.

The processing happens in rounds. In every round, the compiler searches for annotated elements. Then the compiler matches the annotated elements to the processors that declared an interest in processing them. Any generated files become input for the next round of the compilation. If there are no more files to process, the compilation ends.

How to observe the work of annotation processors

There are two compiler flags, -XprintProcessorInfo and -XprintRounds, that will present information about the compilation process and the compilation rounds.

You can find an example config for Gradle here.
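
For example, in Gradle (Groovy DSL) the flags could be passed to javac like this; the snippet is an assumption about your build setup, not the article’s exact config, though the flags themselves are standard javac flags:

```groovy
tasks.withType(JavaCompile).configureEach {
    options.compilerArgs += ['-XprintProcessorInfo', '-XprintRounds']
}
```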

How to write an annotation processor

To write an annotation processor, you must create an implementation of the Processor interface.

The Processor defines six methods, which is a lot to implement. Fortunately, the tool’s creators prepared AbstractProcessor to be extended, which simplifies the programmer’s job. The AbstractProcessor’s API is slightly different from the Processor’s and provides default implementations of some methods.

Once the implementation is ready, you must notify the compiler to use your processor. The javac compiler has some flags for annotation processing, but this is not how you should work with it. To notify the compiler about the processor, you must specify its fully qualified name in the META-INF/services/javax.annotation.processing.Processor file; the file can contain more than one processor. This approach also works with build tools. No one builds their project using bare javac, right?
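
For example, if the processor class were io.jd.framework.processor.BeanProcessor (a hypothetical name), the file would contain the single line:

```
io.jd.framework.processor.BeanProcessor
```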

Build tools support

Build tools like Maven or Gradle have built-in support for using the processors.

Creating your own DI framework

As mentioned above, the Java Own Framework — step by step article covers how runtime annotation handling for DI works. As a counterpart, I will show a basic compile-time framework. This approach has some advantages over the ‘classic’ one; you can read more about them in the Micronaut release notes. Neither the framework we are building nor Micronaut is reflection-free, but both rely on reflection only partially and in a limited manner.

Note: An annotation processor is a flexible tool. The presented solution is highly unlikely to be the only option.

Here comes the main dish of the article. We are going to build our DI framework together. The goal is to make the code below work.
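
A sketch of what that could look like, with class names following the article (the participate call and its arguments are illustrative):

```java
public class FrameworkApp {
    public static void main(String[] args) {
        // The framework discovers the beans and wires the dependencies for us.
        BeanProvider provider = BeanProviderFactory.getInstance();
        ParticipationService participationService = provider.provide(ParticipationService.class);
        participationService.participate(new ParticipantId(1), new EventId(2));
    }
}
```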

We can make some assumptions based on the code above. First, we need the framework to provide annotations for marking classes. I decided to use the standardised jakarta.inject.* library for annotations; to be more precise, just jakarta.inject.Singleton. The same annotation is used by Micronaut.

The second thing we can be sure about is that we need a BeanProvider. Frameworks like to refer to it using the word Context, as in ApplicationContext.

The third necessary thing is an annotation processor that will process the mentioned annotation(s). It should produce classes that allow the framework to provide the expected dependencies at runtime.

The framework should use the reflection mechanism as little as possible.

For the sake of simplicity, we will assume that the framework:

  • handles concrete classes annotated with @Singleton that have one constructor only,
  • utilises the singleton scope (each bean will have only one instance for a given BeanProvider).

How should the framework work?

The annotation processing approach is powerful and offers many ways to achieve the goal. Therefore, the design is the point where we should start. We will begin with a basic version and develop it gradually as the article progresses.

The diagram below shows the high-level architecture of the desired solution.

Framework “class” diagram

As you can see, we need a BeanProcessor to generate an implementation of the BeanDefinition for each bean. The BeanDefinitions are then picked up by the BaseBeanProvider, which implements the BeanProvider interface (not shown in the diagram). In the application code, we use the BaseBeanProvider, created for us by the BeanProviderFactory. We also use the ScopeProvider interface, which is supposed to handle the scope of the bean’s lifespan. In this example, as mentioned, we only care about the singleton scope.

Implementation of the framework

The framework itself is placed in the Gradle subproject called framework.

Basic interfaces

Let’s start with the BeanDefinition interface.
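
A sketch consistent with the description below; the repository version may differ in details:

```java
public interface BeanDefinition<T> {
    // Builds the bean, fetching its dependencies from the given provider.
    T create(BeanProvider beanProvider);

    // The Class object for the bean's type.
    Class<T> type();
}
```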

The interface has only two methods: type(), which provides the Class object for the bean’s class, and create(…), which builds the bean itself. The create(…) method accepts a BeanProvider so it can fetch the dependencies it needs, as it is not supposed to create them itself; hence the DI.

The framework will also need the BeanProvider interface with just two methods.
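
Again, a sketch matching the description below:

```java
import java.util.List;

public interface BeanProvider {
    // Provides exactly one bean matching beanType; throws otherwise.
    <T> T provide(Class<T> beanType);

    // Provides all beans that are of type beanType or its subtype.
    <T> List<T> provideAll(Class<T> beanType);
}
```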

The provideAll(…) method provides all beans that match the parameter Class<T> beanType. By match, I mean that the given bean is a subtype of, or is the same type as, the given beanType. The provide(…) method is almost the same but provides exactly one matching bean; an exception is thrown if there are no matching beans or more than one.

Annotation processor

We expect the annotation processor to find classes annotated with @Singleton, then check that they are valid (not interfaces or abstract classes, and with exactly one constructor). The final step is creating an implementation of the BeanDefinition for each annotated class.

So we should start by implementing it, right?

Test-driven development would object. We will get back to the tests later; for now, let’s focus on the implementation.

Step 1 — define the processor

Let’s define our processor:

Our processor will extend the provided AbstractProcessor instead of fully implementing the Processor interface.

The actual implementation is richer than what you see here. Don’t worry; it will be used to the full extent in the next steps. The simplified version shown here is enough to do the actual DI work.
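
A simplified sketch of the declaration (the annotations are explained in the next step):

```java
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;

@SupportedAnnotationTypes({"jakarta.inject.Singleton"})
@SupportedSourceVersion(SourceVersion.RELEASE_17)
public class BeanProcessor extends AbstractProcessor {
    // process(...) is overridden in Step 3.
}
```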

Step 2 — add annotations!?

Thanks to the use of AbstractProcessor, we don’t have to override any methods. Annotations can be used instead:

  1. @SupportedAnnotationTypes corresponds to Processor.getSupportedAnnotationTypes and is used to build the returned value. As defined, the processor cares only for @jakarta.inject.Singleton.
  2. @SupportedSourceVersion(SourceVersion.RELEASE_17) corresponds to Processor.getSupportedSourceVersion and is used to build the returned value. The processor will support the language up to the level of Java 17.

Step 3 — override the process method

Please assume that the code below is included in the BeanProcessor class body.
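
A sketch of the method, with numbered comments matching the list below (processBeans is shown in Step 4):

```java
// Assumed imports on the enclosing class:
// import java.util.Set;
// import javax.annotation.processing.RoundEnvironment;
// import javax.lang.model.element.TypeElement;
// import javax.tools.Diagnostic;

@Override
public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) { // (1)
    try {
        processBeans(roundEnv); // (2)
    } catch (Exception e) {
        processingEnv.getMessager() // (3)
                .printMessage(Diagnostic.Kind.ERROR, "Exception occurred %s".formatted(e));
    }
    return false; // (4)
}
```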

  1. The annotations param provides a set of annotations represented as Elements. The annotations are represented at least by the TypeElement interface. It may seem unusual, as everyone is used to java.lang.Class or the broader java.lang.reflect.Type, which are runtime representations.
    On the other hand, there is also the compile-time representation. Let me introduce the Element interface, the common interface for all language-level compile-time constructs such as classes, modules, variables and packages. It is worth mentioning that there are subtypes corresponding to these constructs, like PackageElement or TypeElement.
    The processor code is going to use the Elements a lot.
  2. As the processor should catch any exception and log it, we will use the try and catch clauses here. The BeanProcessor.processBeans method will provide the actual annotation processing.
  3. The annotation processor framework provides the Messager instance to the user through the processingEnv field of AbstractProcessor. The Messager is a way to report any errors, warnings, etc.
    It defines four overloaded printMessage(…) methods, whose first parameter defines the message type using the Diagnostic.Kind enum. In the code, there is an example of an error message.
    If a processor throws an exception, the compilation will fail without extra diagnostic data.
  4. There is no need to claim the annotations, so the method returns false.

Step 4 — write the actual processing
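
A sketch of processBeans, numbered to match the list below; the writeDefinition helper is covered in Step 5:

```java
// Assumed imports: jakarta.inject.Singleton, javax.lang.model.element.Element,
// javax.lang.model.element.TypeElement, javax.lang.model.util.ElementFilter, java.util.Set.

private void processBeans(RoundEnvironment roundEnv) {
    Set<? extends Element> annotated = roundEnv.getElementsAnnotatedWith(Singleton.class); // (1)
    Set<TypeElement> types = ElementFilter.typesIn(annotated);                             // (2)
    var typeDependencyResolver = new TypeDependencyResolver();                             // (3)
    types.stream()
            .map(type -> typeDependencyResolver.resolve(type, processingEnv.getMessager())) // (4)
            .forEach(this::writeDefinition);                                                // (5)
}
```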

  1. First, the RoundEnvironment is used to provide all elements from the compilation round annotated with @Singleton.
  2. Then the ElementFilter is used to get only TypeElements out of annotated. It could be wise to fail here when annotated differs in size from types, but one can annotate anything with @Singleton, and we don’t want to handle that. Therefore, we won’t care for anything other than TypeElements. They represent class and interface elements during compilation.
    The ElementFilter is a utility class that filters Iterable<? extends Element> or Set<? extends Element> to get elements matching criteria with type narrowed to matching Element implementation.
  3. As the next step, we instantiate the TypeDependencyResolver, which is part of our framework. The class is responsible for taking a type element, checking that it has exactly one constructor, and finding out what the constructor’s parameters are. We will cover its code later on.
  4. Then we resolve our dependencies using the TypeDependencyResolver so that we can build our BeanDefinition instances.
  5. The last thing to do is write Java files with definitions. We will cover it in Step 5.

Getting back to the TypeDependencyResolver, the code below shows the implementation:
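
A sketch consistent with the walkthrough below; the failOnTooManyConstructors body is a minimal stand-in for the linked implementation:

```java
import java.util.List;
import javax.annotation.processing.Messager;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.element.VariableElement;
import javax.lang.model.util.ElementFilter;
import javax.tools.Diagnostic;

public class TypeDependencyResolver {

    public Dependency resolve(TypeElement element, Messager messager) {
        var constructors = ElementFilter.constructorsIn(element.getEnclosedElements()); // (1)
        return constructors.size() == 1                                                 // (2)
                ? resolveDependency(element, constructors)                              // (3)
                : failOnTooManyConstructors(element, messager);                         // (4)
    }

    private Dependency resolveDependency(TypeElement element, List<ExecutableElement> constructors) {
        ExecutableElement constructor = constructors.get(0);
        return new Dependency(element, constructor.getParameters().stream()
                .map(VariableElement::asType)
                .toList());
    }

    private Dependency failOnTooManyConstructors(TypeElement element, Messager messager) {
        // Report the problem and abort; the repository's version is linked in the article.
        messager.printMessage(Diagnostic.Kind.ERROR,
                "Class %s has more than one constructor".formatted(element), element);
        throw new IllegalStateException("Multiple constructors found for " + element);
    }
}
```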

  1. The ElementFilter, which we’re already familiar with, gets the constructors of the element.
  2. A check is carried out to ensure our element has just one constructor.
  3. If there is one constructor, we follow the process.
  4. In case there is more than one, the compilation fails. You can see the failOnTooManyConstructors method implementation here.
    The resolved constructor is used to create a Dependency object that holds the element and its dependencies. It will be used for writing the actual Java code. Seeing the Dependency implementation would be beneficial, so please take a look:
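
A compact sketch; a record works here, though the repository’s version may be a regular class with extra logic:

```java
import java.util.List;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.TypeMirror;

// The bean's type element plus its constructor parameter types, in declaration order.
public record Dependency(TypeElement type, List<TypeMirror> dependencies) {
}
```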

You may have noticed the strange type TypeMirror. It represents a type in the Java language (literally the language, as this is a compile-time representation).

Step 5 — Writing definitions

How can I write Java source code?

To write Java code during annotation processing, you can use almost anything. You are good to go as long as you end up with CharSequence/String/byte[].

In examples on the Internet, you will find that it is popular to use a StringBuffer. Honestly, I find it inconvenient to write source code that way; there is a better solution available to us.

JavaPoet is a library for writing Java source code using a Java API. You will see it in action in the next section.

Missing part of BeanProcessor

Getting back to the BeanProcessor: some parts of the file have not been revealed yet. Let us get back to them:

The writing is done in two steps:

  1. The DefinitionWriter creates the BeanDefinition, and a JavaFile instance contains it.
  2. The implementation is then written to an actual file using the Filer instance provided via processingEnv. Should the writing fail, the compilation will fail, and the compiler will print the error message.
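
A sketch of those two steps as they could appear in the BeanProcessor body (names follow the article; details may differ):

```java
// Assumed imports: com.squareup.javapoet.JavaFile, java.io.IOException, javax.tools.Diagnostic.

private void writeDefinition(Dependency dependency) {
    JavaFile javaFile = new DefinitionWriter(dependency.type(), dependency.dependencies())
            .createDefinition(); // (1)
    writeFile(javaFile);
}

private void writeFile(JavaFile javaFile) {
    try {
        javaFile.writeTo(processingEnv.getFiler()); // (2)
    } catch (IOException e) {
        processingEnv.getMessager()
                .printMessage(Diagnostic.Kind.ERROR, "Failed to write definition %s".formatted(e));
    }
}
```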

Filer is an interface that supports file creation for an annotation processor. The location for the generated files is configured through the -s javac flag; however, most of the time, build tools handle it for you. In that case, the files are stored in a directory like build/generated/sources/annotationProcessor/java for Gradle, or similar for other tools.

The creation of Java code takes place in DefinitionWriter, and you will see the implementation in a moment. However, the question is what such a definition looks like. I think an example will show it best.

An example of what should be written

For the below Bean:
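
Take a hypothetical bean with two constructor dependencies (ServiceA and ServiceB are illustrative names):

```java
import jakarta.inject.Singleton;

@Singleton
public class ServiceC {
    private final ServiceA serviceA;
    private final ServiceB serviceB;

    public ServiceC(ServiceA serviceA, ServiceB serviceB) {
        this.serviceA = serviceA;
        this.serviceB = serviceB;
    }
}
```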

The definition should look like the code below:
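
A sketch of the generated definition, with numbered comments matching the list below:

```java
public class $ServiceC$Definition implements BeanDefinition<ServiceC> { // (1)

    private final ScopeProvider<ServiceC> provider = // (2)
            ScopeProvider.singletonScope(beanProvider ->
                    new ServiceC(
                            beanProvider.provide(ServiceA.class),
                            beanProvider.provide(ServiceB.class)));

    @Override
    public ServiceC create(BeanProvider beanProvider) { // (3)
        return provider.apply(beanProvider);
    }

    @Override
    public Class<ServiceC> type() { // (4)
        return ServiceC.class;
    }
}
```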

There are four elements here:

  1. An inconvenient name to prevent people from using it directly. The class should implement BeanDefinition<BeanType>.
  2. A field of type ScopeProvider, responsible for instantiation of bean and ensuring its lifetime (scope). Singleton scope is the only scope the framework covers, so the ScopeProvider.singletonScope() method will be the only one used.
    The Function<BeanProvider, Bean> used to instantiate the bean is passed to the ScopeProvider.singletonScope method.
    I will cover the implementation of the ScopeProvider later. For now, it is enough to know that it will ensure just one instance of the bean in our DI context.
    However, if you are curious, the source code is available here.
  3. The actual create method uses the provider and connects it with the beanProvider through the apply method.
  4. The implementation of the type method is a simple task.

The example shows that the only bean-specific things are the type passed to the BeanDefinition declaration, the constructor call, and the field/return types.

Implementation of the DefinitionWriter

To keep this concise, I will omit the code of the private methods, the constructor, and some small snippets. Let us see an overview of the Java code that writes Java code. Here is a link to the full code.
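
A condensed sketch using JavaPoet, numbered to match the list below. The framework types (BeanDefinition, BeanProvider, ScopeProvider) are assumed importable, and the linked repository version is more thorough:

```java
import com.squareup.javapoet.*;
import javax.lang.model.element.Modifier;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.TypeMirror;
import java.util.List;

class DefinitionWriter {
    private final TypeElement definedClass;                   // (1) the bean
    private final List<TypeMirror> constructorParameterTypes; // (1) its constructor parameters
    private final ClassName definedClassName;                 // (1) JavaPoet name of the bean

    DefinitionWriter(TypeElement definedClass, List<TypeMirror> constructorParameterTypes) {
        this.definedClass = definedClass;
        this.constructorParameterTypes = constructorParameterTypes;
        this.definedClassName = ClassName.get(definedClass);
    }

    JavaFile createDefinition() {
        var definitionSpec = TypeSpec.classBuilder("$%s$Definition".formatted(definedClassName.simpleName())) // (2)
                .addModifiers(Modifier.PUBLIC)
                .addSuperinterface(ParameterizedTypeName.get(ClassName.get(BeanDefinition.class), definedClassName)) // (3)
                .addField(scopeProvider())      // (6)
                .addMethod(createMethodSpec())  // (4)
                .addMethod(typeMethodSpec())    // (5)
                .build();
        return JavaFile.builder(definedClassName.packageName(), definitionSpec).build();
    }

    private MethodSpec createMethodSpec() { // (4)
        return MethodSpec.methodBuilder("create")
                .addAnnotation(Override.class)
                .addModifiers(Modifier.PUBLIC)
                .returns(definedClassName)
                .addParameter(BeanProvider.class, "beanProvider")
                .addStatement("return provider.apply(beanProvider)")
                .build();
    }

    private MethodSpec typeMethodSpec() { // (5)
        return MethodSpec.methodBuilder("type")
                .addAnnotation(Override.class)
                .addModifiers(Modifier.PUBLIC)
                .returns(ParameterizedTypeName.get(ClassName.get(Class.class), definedClassName))
                .addStatement("return $T.class", definedClassName)
                .build();
    }

    private FieldSpec scopeProvider() { // (6)
        return FieldSpec.builder(
                        ParameterizedTypeName.get(ClassName.get(ScopeProvider.class), definedClassName),
                        "provider", Modifier.PRIVATE, Modifier.FINAL)
                .initializer(singletonScopeInitializer())
                .build();
    }

    private CodeBlock singletonScopeInitializer() { // (6)
        // ScopeProvider.singletonScope(beanProvider -> new Bean(beanProvider.provide(...), ...))
        return CodeBlock.builder()
                .add("$T.singletonScope(beanProvider -> new $T(", ScopeProvider.class, definedClassName)
                .add(constructorParameterTypes.stream()
                        .map(type -> CodeBlock.of("beanProvider.provide($T.class)", type))
                        .collect(CodeBlock.joining(", ")))
                .add("))")
                .build();
    }
}
```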

Phew, that is a lot. Don’t be afraid; it’s simpler than it looks.

  1. There are three instance fields:
    - TypeElement definedClass is our bean,
    - List<TypeMirror> constructorParameterTypes contains the types of the bean constructor’s parameters (who would have guessed, right?),
    - ClassName definedClassName is the JavaPoet object, created out of definedClass. It represents a fully qualified name for classes.
  2. TypeSpec is a JavaPoet class representing the definition of a Java type (a class or an interface). It is created using the classBuilder static method, to which we pass our strange name, constructed from the actual bean type name.
  3. ParameterizedTypeName.get(ClassName.get(BeanDefinition.class), definedClassName) creates code that represents BeanDefinition<BeanTypeName>, which is applied as a super interface of our class through the addSuperinterface method.
  4. The create() method implementation is not that hard, and it’s quite self-explanatory. Please look at the createMethodSpec() method and its application.
  5. The same applies to the type() method as for the create().
  6. The scopeProvider() is similar to the previous methods. However, the tricky part is invoking the constructor. The singletonScopeInitializer() is responsible for creating a constructor call wrapped in ScopeProvider.singletonScope(beanProvider -> …). We call BeanProvider.provide for every parameter to get the dependencies, keeping the calls in the order of the constructor parameters.

Ok, the BeanDefinitions are ready. Now, we move on to the ScopeProvider.

ScopeProvider implementation
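
A sketch matching the walkthrough below:

```java
import java.util.function.Function;

public sealed interface ScopeProvider<T> extends Function<BeanProvider, T> // (1)
        permits SingletonProvider {

    static <T> ScopeProvider<T> singletonScope(Function<BeanProvider, T> delegate) { // (2)
        return new SingletonProvider<>(delegate);
    }
}

final class SingletonProvider<T> implements ScopeProvider<T> { // (3)
    private final Function<BeanProvider, T> delegate;
    private volatile T value;

    SingletonProvider(Function<BeanProvider, T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T apply(BeanProvider beanProvider) {
        // Lazy singleton: create the bean on first use only.
        if (value == null) {
            value = delegate.apply(beanProvider);
        }
        return value;
    }
}
```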

  1. You can see the sealed interface definition that extends Function<BeanProvider, T>, so the Function.apply() method is available.
  2. A factory method for the SingletonProvider.
  3. The implementation of the SingletonProvider is based on a typical lazy-value implementation in Java. In the synchronized apply method, we create the instance of our bean only if there isn’t one yet. The value field is marked as volatile to prevent issues in a multithreaded environment.

Now we are ready. It is time for the runtime part of the framework.

Step 6 — runtime provisioning of beans

Runtime provisioning is the last part of the framework to work on. The BeanProvider interface has already been defined. Now we just need the implementation to do the actual provisioning.

The BaseBeanProvider must have access to all the instantiated BeanDefinitions; it shouldn’t be responsible for creating them, only for using them to provide the beans.

The BeanProviderFactory

Due to that, the BeanProviderFactory takes on this responsibility via its static BeanProvider getInstance(String… packages) method, where the packages parameter defines which packages to scan for BeanDefinitions present on the classpath. This is the code:
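
A sketch based on my reading of the Reflections 0.10 API, numbered to match the list below; the exact builder calls and the wrapped exception are assumptions:

```java
import java.util.List;
import org.reflections.Reflections;
import org.reflections.Store;
import org.reflections.scanners.Scanners;
import org.reflections.util.ConfigurationBuilder;
import org.reflections.util.FilterBuilder;
import org.reflections.util.QueryFunction;

public class BeanProviderFactory {

    private static final QueryFunction<Store, Class<?>> TYPE_QUERY =
            Scanners.SubTypes.of(BeanDefinition.class).asClass(); // (2)

    public static BeanProvider getInstance(String... packages) { // (1)
        var filter = new FilterBuilder().includePackage("io.jd"); // (4)
        for (String pkg : packages) {
            filter.includePackage(pkg);
        }
        var configuration = new ConfigurationBuilder() // (3)
                .forPackages("io.jd")
                .forPackages(packages)
                .filterInputsBy(filter);
        var reflections = new Reflections(configuration); // (5)
        List<BeanDefinition<?>> definitions = reflections.get(TYPE_QUERY).stream() // (6)
                .map(BeanProviderFactory::getInstance)
                .toList();
        return new BaseBeanProvider(definitions); // (8)
    }

    private static BeanDefinition<?> getInstance(Class<?> e) { // (7)
        try {
            return (BeanDefinition<?>) e.getDeclaredConstructors()[0].newInstance();
        } catch (ReflectiveOperationException ex) {
            // The repository wraps this in a custom RuntimeException (linked below).
            throw new RuntimeException("Failed to instantiate BeanDefinition %s".formatted(e), ex);
        }
    }
}
```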

  1. The method is responsible for getting an instance of the BeanProvider.
  2. Here is where it gets interesting. I define the constant TYPE_QUERY with a very specific type from the Reflections library. The project README.md defines the library as:
    Reflections scans and indexes your project’s classpath metadata, allowing reverse transitive query of the type system on runtime.
    I encourage you to read more about it, but here I will just explain how it is used in the code. The defined QueryFunction will be used to scan the classpath at runtime to find all subtypes of BeanDefinition.
  3. The configuration is created for the Reflections object. It will be used in the next part of the code.
  4. The configuration states that the BeanProviderFactory will scan the io.jd package and the passed packages, with a matching package filter. Thanks to that, the framework only provides beans from the expected packages.
  5. The Reflections object is created. It will be responsible for performing our query later in the code.
  6. The reflections object performs the TYPE_QUERY, and all the BeanDefinition instances are created using the static BeanDefinition<?> getInstance(Class<?> e) method.
  7. The method that creates instances of BeanDefinition uses the reflection. When there’s an exception, the code wraps it in a custom RuntimeException. The code of the custom exception is here.
  8. Finally, an instance of the BeanProvider interface is created in the form of a BaseBeanProvider instance, whose source will be presented in the next few paragraphs.

BaseBeanProvider

So, how is the BaseBeanProvider implemented? It is easy to grasp. The source code in the repository is very similar, but (spoiler alert!) changed to handle @Transactional in the next article.
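
A simplified sketch, numbered to match the walkthrough below:

```java
import java.util.List;

class BaseBeanProvider implements BeanProvider {
    private final List<BeanDefinition<?>> definitions;

    BaseBeanProvider(List<BeanDefinition<?>> definitions) {
        this.definitions = definitions;
    }

    @Override
    public <T> List<T> provideAll(Class<T> beanType) { // (1)
        return definitions.stream()
                .filter(definition -> beanType.isAssignableFrom(definition.type()))
                .map(definition -> beanType.cast(definition.create(this)))
                .toList();
    }

    @Override
    public <T> T provide(Class<T> beanType) { // (2)
        var matching = provideAll(beanType);
        if (matching.isEmpty()) { // (3)
            throw new IllegalStateException("No bean of type %s found".formatted(beanType.getCanonicalName()));
        }
        if (matching.size() > 1) { // (4)
            throw new IllegalStateException("More than one bean of type %s found".formatted(beanType.getCanonicalName()));
        }
        return matching.get(0); // (5)
    }
}
```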

  1. provideAll(Class<T> beanType) takes all of the BeanDefinitions and keeps those whose type() returns a Class<?> that is a subtype of, or exactly, the provided beanType. Thanks to that, it can collect all matching beans.
  2. provide(Class<T> beanType) is also simple. It reuses the provideAll method and then picks the single matching bean.
  3. This code checks whether there is any bean matching the beanType and throws an exception if there is none.
  4. This code checks whether there is more than one bean matching the beanType and throws an exception if there is.
  5. If there is exactly one matching bean, it is returned.

That’s it!

We got all the parts. Now we should check if the code works.

Did we miss something?

Shouldn’t we have started with tests of the annotation processor? How can the annotation processor be tested?

Annotation processor testing

Annotation processors are rather poorly suited to being tested. One way to test a processor is to create a separate project, or a Gradle/Maven submodule, that uses it; a compilation failure would then mean that something is wrong. That doesn’t sound good, right?

The other option is to utilise the compile-testing library created by Google. It simplifies the testing process, even though the tool isn’t perfect. Please find the tutorial on how to use it here.

I used both approaches in the article’s repository: compile-testing for the “unit tests” and the integrationTest module for the “integration tests”.

You can find the test implementation and configuration in the framework subproject’s files below:

Step 7 — A working framework

In the beginning, there was NoFrameworkApp:

If its main method is run, we get three lines printed:

It looks like this with FrameworkApp:

However, to make it work, we have to add @Singleton here and there. Please refer to the source code in the directory. If we run that main method, we will get the same result:

Therefore, we can call it a success. The framework works like a charm!

What’s next?

Once you check the result of running the code from the previous paragraph, you will notice some additional messages, about beginning and committing a transaction.

Handling the transactions is also typical for frameworks. I will cover how to handle transactions in the next article of this series.

