
How Absinthe Uses Compilation Callbacks for Schema Validation in Elixir

Source: https://blog.appsignal.com/2021/01/19/how-absinthe-uses-compilation-callbacks-for-schema-validation-in-elixir.html

Devon Estes on Jan 19, 2021


Absinthe manages to do a lot of interesting things during its compilation process, and today we’re going to look at how that works. In particular, we’ll look closely at how it uses some metaprogramming tricks and module attributes to provide compile-time schema validation for us.

It’s pretty amazing (to me, at least) that when we use Absinthe, we get a really simple, easy-to-use API for defining our schema and still get a good amount of compile-time type checking out of it! For example, if we try to use a type that hasn’t yet been defined, we’ll see an error like this in our terminal when we try to compile our application:

== Compilation error in file lib/blog_web/schema.ex ==
** (Absinthe.Schema.Error) Invalid schema:
/home/devon/sandbox/absinthe_tutorial/lib/blog_web/schema/account_types.ex:10: User_state :custom_enum is not defined in your schema.

  Types must exist if referenced.


    (absinthe 1.4.16) lib/absinthe/schema.ex:271: Absinthe.Schema.__after_compile__/2
    (stdlib 3.13.2) lists.erl:1267: :lists.foldl/3
    (stdlib 3.13.2) erl_eval.erl:680: :erl_eval.do_apply/6
    (elixir 1.11.2) lib/kernel/parallel_compiler.ex:314: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/7

The fact that this happens at all is cool on its own, but how Absinthe manages to do it is what I find really interesting. It takes a lot of tricky (but fascinating) usage of modules and module attributes to make it work, and that’s what we’ll be covering today. But before we can get to the actual type checking, we need to take a quick look at how one defines a schema with Absinthe, and then at how that schema is compiled to create those modules and module attributes using Elixir’s compilation callbacks.

Defining a Schema with Absinthe

To define our GraphQL schema using Absinthe, we need to write a single module in which that schema is declared, and in that module we need to use Absinthe.Schema. If your schema is small, doing all of that in one file is easy enough:

defmodule BlogWeb.Schema do
  use Absinthe.Schema

  alias BlogWeb.Resolvers

  object :user do
    field :id, :id
    field :name, :string
    field :posts, list_of(:post) do
      resolve &Resolvers.Content.list_posts/3
    end
  end

  object :post do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user, non_null(:user)
  end

  input_object :post_params do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user_id, non_null(:id)
  end

  query do
    field :posts, list_of(:post) do
      resolve(&Resolvers.Content.list_posts/3)
    end
  end

  mutation do
    field :create_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.create_post/3)
    end

    field :update_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.update_post/3)
    end

    field :delete_post, :post do
      arg(:id, non_null(:id))
      resolve(&Resolvers.Content.delete_post/3)
    end
  end
end

However, once you start building out your application and things get bigger, you generally end up breaking the schema up into multiple “schema fragment” files and importing the types defined in those fragments into your schema using the Absinthe.Schema.Notation.import_types/2 and the Absinthe.Schema.Notation.import_fields/2 macros.

To do that with our schema above we might end up doing something like what is below, with each set of types defined in its own module, each of which calls use Absinthe.Schema.Notation. We can imagine that each module is defined in its own file, although they technically don’t need to be:

defmodule BlogWeb.Schema.UserTypes do
  use Absinthe.Schema.Notation

  alias BlogWeb.Resolvers

  object :user do
    field :id, :id
    field :name, :string
    field :posts, list_of(:post) do
      resolve &Resolvers.Content.list_posts/3
    end
  end
end

defmodule BlogWeb.Schema.PostTypes do
  use Absinthe.Schema.Notation

  alias BlogWeb.Resolvers

  object :post do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user, non_null(:user)
  end

  input_object :post_params do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user_id, non_null(:id)
  end

  object :post_queries do
    field :posts, list_of(:post) do
      resolve(&Resolvers.Content.list_posts/3)
    end
  end

  object :post_mutations do
    field :create_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.create_post/3)
    end

    field :update_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.update_post/3)
    end

    field :delete_post, :post do
      arg(:id, non_null(:id))
      resolve(&Resolvers.Content.delete_post/3)
    end
  end
end

defmodule BlogWeb.Schema do
  use Absinthe.Schema

  import_types(Absinthe.Type.Custom)
  import_types(BlogWeb.Schema.UserTypes)
  import_types(BlogWeb.Schema.PostTypes)

  alias BlogWeb.Resolvers

  query do
    import_fields(:post_queries)
  end

  mutation do
    import_fields(:post_mutations)
  end
end

But how does Absinthe know that when we reference the :post type in the definition of our :user type, :post is a valid type to use? Well, that’s where the fun stuff comes in!

How Elixir’s Compilation Callbacks Work

Well, to know how Absinthe works its magic, first we need to know a bit about Elixir’s compilation callbacks. A compilation callback is, as it sounds, a function that is executed before, during, or after the compilation of a module. There are three compilation callbacks, but the two we care about today are @before_compile and @after_compile.

These are two functions that are called, as you would assume, before and after compilation of a module. The before_compile callback receives as an argument the compilation __ENV__, which is a struct containing information about the compilation process. More info on what exactly is in there can be found in the docs for Macro.Env. Likewise, the after_compile callback receives that same compilation __ENV__, and also the compiled bytecode for the module.

These two callbacks give us the opportunity to set up anything that’s needed for compilation in our before_compile callback, and then to check what has just been compiled in our after_compile callback. That’s exactly how Absinthe uses these two features for its schema compilation and schema validation.
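To make that concrete, here’s a minimal, hypothetical sketch of both callbacks in action (the CompileHooks and MyModule names are made up for illustration and aren’t part of Absinthe):

defmodule CompileHooks do
  # Invoked as a macro just before the using module is compiled.
  # env is a Macro.Env struct: env.module, env.file, env.line, and so on.
  defmacro __before_compile__(env) do
    IO.puts("about to finish compiling #{inspect(env.module)} (#{env.file})")

    # Whatever quoted code we return here gets injected into the module body.
    quote do
      def injected_by_before_compile?, do: true
    end
  end

  # Invoked as a plain function once the module's bytecode exists.
  def __after_compile__(env, bytecode) do
    IO.puts("compiled #{inspect(env.module)}: #{byte_size(bytecode)} bytes of BEAM bytecode")
  end
end

defmodule MyModule do
  @before_compile CompileHooks
  @after_compile CompileHooks
end

Raising an exception inside __after_compile__/2 aborts compilation with an error, and that’s exactly the hook Absinthe uses to surface schema validation failures.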

How Absinthe Does Schema Validation at Compile Time

So, what exactly is Absinthe doing when it compiles? Well, let’s start with the compilation of those schema fragments. Absinthe.Schema.Notation defines a __before_compile__/1 macro that is used as the handler for the @before_compile callback in each of those schema fragments.

defmacro __before_compile__(env) do
  module_attribute_descs =
    env.module
    |> Module.get_attribute(:absinthe_desc)
    |> Map.new()

  attrs =
    env.module
    |> Module.get_attribute(:absinthe_blueprint)
    |> List.insert_at(0, :close)
    |> reverse_with_descs(module_attribute_descs)

  imports =
    (Module.get_attribute(env.module, :__absinthe_type_imports__) || [])
    |> Enum.uniq()
    |> Enum.map(fn
      module when is_atom(module) -> {module, []}
      other -> other
    end)

  schema_def = %Schema.SchemaDefinition{
    imports: imports,
    module: env.module,
    __reference__: %{
      location: %{file: env.file, line: 0}
    }
  }

  blueprint =
    attrs
    |> List.insert_at(1, schema_def)
    |> Absinthe.Blueprint.Schema.build()

  [schema] = blueprint.schema_definitions

  {schema, functions} = lift_functions(schema, env.module)

  sdl_definitions =
    (Module.get_attribute(env.module, :__absinthe_sdl_definitions__) || [])
    |> List.flatten()
    |> Enum.map(fn definition ->
      Absinthe.Blueprint.prewalk(definition, fn
        %{module: _} = node ->
          %{node | module: env.module}

        node ->
          node
      end)
    end)

  {sdl_directive_definitions, sdl_type_definitions} =
    Enum.split_with(sdl_definitions, fn
      %Absinthe.Blueprint.Schema.DirectiveDefinition{} ->
        true

      _ ->
        false
    end)

  schema =
    schema
    |> Map.update!(:type_definitions, &(sdl_type_definitions ++ &1))
    |> Map.update!(:directive_definitions, &(sdl_directive_definitions ++ &1))

  blueprint = %{blueprint | schema_definitions: [schema]}

  quote do
    unquote(__MODULE__).noop(@desc)

    def __absinthe_blueprint__ do
      unquote(Macro.escape(blueprint, unquote: true))
    end

    unquote_splicing(functions)
  end
end

At first the code in that macro might be tricky to understand, but the key to what’s going on is the definition of the __absinthe_blueprint__/0 function it injects. That function returns a struct containing a lot of information about everything that was defined while the current schema fragment was being compiled. This __absinthe_blueprint__/0 function will be really important in the final compilation step that we’ll look at in a bit.
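Once the fragment modules above are compiled, we can call that generated function ourselves. The output below is heavily trimmed and only approximate - the exact fields depend on your Absinthe version - but it gives a feel for what the blueprint holds:

iex> BlogWeb.Schema.PostTypes.__absinthe_blueprint__()
%Absinthe.Blueprint{
  schema_definitions: [
    %Absinthe.Blueprint.Schema.SchemaDefinition{
      module: BlogWeb.Schema.PostTypes,
      type_definitions: [...],
      ...
    }
  ],
  ...
}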

One other really interesting thing about this code that’s important to notice is how many calls to Module.get_attribute/2 there are! This is something Absinthe leans on heavily throughout its compilation process - using module attributes essentially as global variables that can be read by other modules during their compilation. There are a lot of calls to Module.get_attribute/2 and Module.put_attribute/3 in this module, and recognizing this pattern helps put the rest of the process into context.
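Here’s a stripped-down, hypothetical sketch of that pattern - MiniNotation, :definitions, and object/1 are all made-up stand-ins for what Absinthe does with :absinthe_blueprint and its notation macros. Macros push entries onto an accumulating module attribute during compilation, and __before_compile__ reads them back out and bakes them into a generated function:

defmodule MiniNotation do
  defmacro __using__(_opts) do
    quote do
      # With accumulate: true, each put_attribute appends instead of overwriting.
      Module.register_attribute(__MODULE__, :definitions, accumulate: true)
      import MiniNotation, only: [object: 1]
      @before_compile MiniNotation
    end
  end

  # Each object/1 call stores an entry in the module attribute at compile time.
  defmacro object(name) do
    quote do
      Module.put_attribute(__MODULE__, :definitions, {:object, unquote(name)})
    end
  end

  # Just before the using module finishes compiling, read everything back and
  # expose it through a generated function, much like __absinthe_blueprint__/0.
  defmacro __before_compile__(env) do
    definitions = Module.get_attribute(env.module, :definitions)

    quote do
      def __definitions__, do: unquote(Macro.escape(definitions))
    end
  end
end

defmodule MyTypes do
  use MiniNotation

  object :user
  object :post
end

MyTypes.__definitions__()
#=> [object: :post, object: :user]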

The other thing happening here is that we’re defining a lot of functions inside a dynamically named module - whatever module happens to be using the notation! These functions contain yet more information, and we can see a bit more of how this is used in the __before_compile__/1 macro defined in Absinthe.Schema:

defmacro __before_compile__(_) do
  quote do
    @doc false
    def __absinthe_pipeline_modifiers__ do
      [@schema_provider] ++ @pipeline_modifier
    end

    def __absinthe_schema_provider__ do
      @schema_provider
    end

    def __absinthe_type__(name) do
      @schema_provider.__absinthe_type__(__MODULE__, name)
    end

    def __absinthe_directive__(name) do
      @schema_provider.__absinthe_directive__(__MODULE__, name)
    end

    def __absinthe_types__() do
      @schema_provider.__absinthe_types__(__MODULE__)
    end

    def __absinthe_types__(group) do
      @schema_provider.__absinthe_types__(__MODULE__, group)
    end

    def __absinthe_directives__() do
      @schema_provider.__absinthe_directives__(__MODULE__)
    end

    def __absinthe_interface_implementors__() do
      @schema_provider.__absinthe_interface_implementors__(__MODULE__)
    end

    def __absinthe_prototype_schema__() do
      @prototype_schema
    end
  end
end

When each schema fragment is compiled, Absinthe also defines a companion module that holds the information about the module that was just compiled - so, for example, for the BlogWeb.Schema.UserTypes module we used above, it will define a BlogWeb.Schema.UserTypes.Compiled module. This naming convention lets Absinthe know where to look for the information belonging to each module that was compiled with schema information in it.
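Once compilation has finished, we can poke at the reflection functions that the __before_compile__/1 macro above injected into our schema. The session below is only a rough, hand-trimmed sketch - the exact struct fields and the provider behind the lookup vary by Absinthe version - but it shows how a type lookup flows through the generated functions:

iex> BlogWeb.Schema.__absinthe_type__(:post)
%Absinthe.Type.Object{identifier: :post, name: "Post", ...}

Under the hood, that call is delegated through @schema_provider to the schema data that was stored at compile time, rather than being computed on every request.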

And now that all that work has been done during the compilation process, we can look at the __after_compile__/2 callback defined in Absinthe.Schema:

def __after_compile__(env, _) do
  prototype_schema =
    env.module
    |> Module.get_attribute(:prototype_schema)

  pipeline =
    env.module
    |> Absinthe.Pipeline.for_schema(prototype_schema: prototype_schema)
    |> apply_modifiers(env.module)

  env.module.__absinthe_blueprint__
  |> Absinthe.Pipeline.run(pipeline)
  |> case do
    {:ok, _, _} ->
      []

    {:error, errors, _} ->
      raise Absinthe.Schema.Error, phase_errors: List.wrap(errors)
  end
end

This is where all of that information and all of that metaprogramming gets put to use for a genuinely helpful user feature! In short, the callback takes everything that’s been stored in the various module attributes and exposed through all of those generated functions in all of those .Compiled modules, and builds up something that Absinthe calls a blueprint. The blueprint is just what it sounds like - it describes how documents will later be evaluated against the current GraphQL schema during resolution. The callback then runs this blueprint through a pipeline of validation phases, and if any errors come back from that run, they’re raised at the end of the compilation process!
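That’s the machinery behind the error message we saw at the very beginning of this post. As a quick way to see it for yourself, a (hypothetical) schema like the one below - which references a :widget type that is never defined anywhere - should fail to compile with an Absinthe.Schema.Error raised from __after_compile__/2 (the exact wording of the message depends on your Absinthe version):

defmodule BrokenSchema do
  use Absinthe.Schema

  query do
    # :widget is referenced here but never defined in the schema, so the
    # blueprint pipeline run in __after_compile__/2 returns an error and
    # compilation is aborted.
    field :widgets, list_of(:widget)
  end
end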

Clearly this is kind of a complicated process, but it’s also a cool way to use some basic features of the Elixir compiler to deliver real value to users. Exploring this process taught me a lot about how this kind of compile-time work is done, but it also made it clear to me that the Absinthe team has put a great deal of time and effort into making this user experience really great, and for that I’m very thankful!

P.S. If you’d like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!

Guest author Devon is a senior Elixir engineer currently working at Sketch. He is also a writer, international conference speaker, and committed supporter of open-source software as a maintainer of Benchee and the Elixir track on Exercism, as well as a frequent contributor to Elixir.

