
Ruby Next: Make all Rubies quack alike

source link: https://evilmartians.com/chronicles/ruby-next-make-all-rubies-quack-alike

Meet Ruby Next, the first transpiler for Ruby that allows you to use the latest language features, including the experimental ones, today—without the stress of a project-wide version upgrade. Read the story behind the gem, discover its inner workings, and see how it can help in pushing Ruby into the future.

These days, Ruby is evolving faster than ever. The latest minor release of the language, 2.7, introduced new syntax features like numbered parameters and pattern matching. However, we all know that switching to the next major Ruby version in production or in your favorite open-source project is not a walk in the park. As a developer of a production website, you have to deal with thousands of lines of legacy code that might break in different subtle ways after the upgrade. As a gem author, you have to support older versions of the language as well as popular alternative implementations (JRuby, TruffleRuby, etc.)—and it might be a while before they pick up on syntax changes, if ever.

This post is based on a RubyConf talk I gave in 2019.

In this post, I want to introduce a new tool for Ruby developers, Ruby Next, which aims to solve these problems and could also help the members of the Ruby Core team evaluate experimental features and proposals. Along the way, we’re going to touch on the following topics:

  • Ruby versions lifecycle and usage statistics
  • Parsers and un-parsers
  • The process of accepting changes to the language

Why backporting is important

So, why isn’t it possible to update to the latest version of Ruby every Christmas? As we all know, Metz has been unveiling new major releases right on December 25th for years. Besides the fact, that no one should ever update anything on a holiday, it also has to do with respect for your fellow developers.

Check out AnyCable, TestProf, and Action Policy if you want to see my open-source work.

Take me. I like to think of myself as a library developer first and an application developer second. My unwavering desire to create new gems out of everything is a subject of long-running jokes among my colleagues at Evil Martians. I am currently actively maintaining dozens of gems—a few of them quite popular. Anyway, that leads us to the first problem.

I have to write code compatible with older versions of Ruby.

Why? Because it’s good practice to support at least all the officially supported Ruby versions—those that haven’t reached their end of life (EOL) yet.

According to the Ruby maintenance calendar, the Ruby versions that are still very much alive at the time of this publication are 2.5, 2.6, and 2.7. Even though you can still use older versions, it is highly recommended to upgrade as soon as possible—new security vulnerabilities, if found, will not be fixed for EOL releases.

That means I have to wait at least two more years to start using all those 2.7 goodies in my gems without forcing everyone to upgrade.

I am not even sure I’ll be writing Ruby two years from now; I want this bloody pattern matching now!

Let’s imagine I don’t care about users and ship a new release with required_ruby_version = "~> 2.7". What’s gonna happen?

My gems would lose their audience big time. See the breakdown of Ruby versions according to the recently launched RubyGems.org Stats:


RubyGems.org stats: version breakdowns (Apr 4, 2020)

We can’t even see 2.7 on this chart. Keep in mind that this data is somewhat skewed: it contains stats not only from Ruby applications and developers but also from irrelevant sources, such as system Rubies (which usually lag a few versions behind the latest release).

A more realistic picture of the current state of the Ruby community comes from JetBrains and their annual survey. The data is for 2019, but I don’t think the numbers have changed drastically:


Which version of Ruby do you use the most?

As you can see, 2.3 is no longer the most popular. And still, the latest version (2.6 at the time of the survey) holds only the bronze.

Two or three most recent versions of Ruby are actively used at any time, and the latest is the least popular among them.

Another insight from this survey: “30% reported that they’re not about to switch”.

Why so? Maybe because upgrading for the sake of upgrading seems too costly. If it ain’t broke, don’t fix it, right?

What could be the right motivation for an upgrade? Performance improvements (every next Ruby version has them)? Not sure. Can new features encourage developers to switch sooner? Sure, if they didn’t come at a price (like, for example, the keyword arguments deprecations and changes in 2.7).

To sum up, the latest language additions could be very attractive in theory but hardly applicable in practice right away.

I decided to find a way to change this to allow everyone to taste modern Ruby in real projects independently of their current environment.

#yield_self, #then transpile

Before we dig into a technical overview of Ruby Next, let me share my personal story of modernizing Ruby.

It was the winter of 2017, and Ruby 2.5 had just appeared under the Christmas tree. One thing from this release caught my attention in a somewhat controversial way: the Kernel#yield_self method. I was a bit skeptical about it. I thought: “Is this a feature that can change the way I write Ruby? I doubt it.”

Anyway, I decided to give it a try and started using it in the applications I worked on (luckily, we try to upgrade Ruby as soon as possible, i.e., around the x.y.1 release). The more I used this method, the more I liked it.

Eventually, #yield_self appeared in the codebase of one of my gems. And, of course, as soon as it happened—tests for Ruby 2.4 failed. The simplest way to fix them would be monkey-patching the Kernel module and making an old Ruby quack like a new one.

Being a follower of best practices in gem development (and even an author of one particular checklist), I knew that monkey-patching is a last resort and a no-go for libraries. In essence, someone else can define a monkey-patched method with the same name and create a conflict. For #yield_self, this is an improbable scenario. But a few months later, the #then alias was merged into Ruby trunk, and a name that short and generic makes a clash much more likely.

So, we need a monkey-patch that is not a monkey-patch.

And you know what? Ruby has got you covered! The patch looks like this:

module YieldSelfThen
  refine BasicObject do
    # Define #yield_self only if the current Ruby doesn't have it (< 2.5)
    unless nil.respond_to?(:yield_self)
      def yield_self
        yield self
      end
    end

    # #then was added in 2.6 as an alias for #yield_self
    alias_method :then, :yield_self
  end
end

Yes, we’re going to talk about refinements for a while.

Doing fine with refine

Refinements can rightly be called the most mind-blowing Ruby feature. In short, a refinement is a lexically scoped monkey-patch. Though I don’t think this definition alone helps you understand what kind of beast this feature is. Let’s consider an example.

Assume that we have the following three scripts:

# succ.rb
def succ(val)
  val.then(&:to_i).then(&:succ)
end

# pred.rb
def pred(val)
  val.then(&:to_i).then(&:pred)
end

# main.rb
require_relative "succ"
require_relative "pred"

puts send(*ARGV[0..1])

If we’re using Ruby 2.6+, we can run them and see the correct result:

$ ruby main.rb succ 14
15

$ ruby main.rb pred 15
14

Trying to run them in Ruby 2.5 would give us the following exception:

$ ruby main.rb succ 14
undefined method `then' for "14":String

Of course, we could replace #then with #yield_self and make everything work as expected. Let’s not do that. Instead, we’ll use the YieldSelfThen refinement defined above.

Let’s put our refinement code into the yield_self_then.rb file and activate this refinement only in the succ.rb file:

# main.rb
+ require_relative "yield_self_then"
  require_relative "succ"
  require_relative "pred"
  ...

  # succ.rb
+ using YieldSelfThen
+
  def succ(val)
    val.then(&:to_i).then(&:succ)
  end

Now when we run the succ command, we can see the result:

$ ruby main.rb succ 14
15

The pred command, however, will still fail:

$ ruby main.rb pred 15
undefined method `then' for "15":String

Refinements are great not only for monkey-patching but for performance as well. Check out the examples from this thread in the Sidekiq repo.

Remember the “lexically scoped” part of the refinement definition? We’ve just seen it in action: the extension defined in the YieldSelfThen module via the refine method is only “visible” in the succ.rb file, where we added the using declaration. Other Ruby files of the program do not “see” it; they work as if there were no extensions at all.

That means refinements allow us to control monkey-patches, to put them on a leash. So, refinement is a safe monkey-patch.

Refinement is a safe monkey-patch.

Even though refinements were introduced back in Ruby 2.0 (first as an experimental feature, and stable since 2.1), they didn’t get much traction. There were several reasons for that:

  • A large number of edge cases in the early days (e.g., no modules support, no send support). The situation has been getting better with every Ruby release, and now refinements are recognized by the majority of Ruby features (in MRI).
  • The support for refinements in alternative Rubies was lagging. The JRuby core team (and especially Charles Nutter) did a great job of improving the situation recently, and since 9.2.9.0 refinements have become usable in JRuby.

The most recent refinements enhancement (fixing the incompatibility with Module#prepend) was made by Jeremy Evans a few months ago and made it into the 2.7 release.

Today I dare to say that all the critical problems with refinements are in the past, and “refinements are experimental and unstable” is no longer a valid argument.

That’s why I bet on refinements to solve the backporting problem.

We can use refinements to backport new APIs safely.

Wondering which new features were added in each minor Ruby release? Check out the Ruby Changes project by Victor Shepelev.

Before Ruby 2.7, making old Rubies quack like a new one was as simple as adding a universal refinement with all the missing methods.

That’s what the original idea for Ruby Next was—one refinement to rule them all:

# ruby_next_2018.rb
module RubyNext
  unless nil.respond_to?(:yield_self)
    refine BasicObject do
      # ...
    end
  end

  unless [].respond_to?(:difference)
    refine Array do
      # ...
    end
  end

  unless [].respond_to?(:tally)
    refine Enumerable do
      # ...
    end
  end

  # ...
end

# ...and then in your code
using RubyNext

Luckily, I hadn’t released this project in 2018. The addition of new features to Ruby 2.7 showed where the refinement approach falls short: we cannot refine syntax. That is when my work on the most exciting part of Ruby Next, the transpiler, began.

“Refining” the syntax, or transpiling

In 2019, the active evolution of Ruby syntax began. A bunch of new features were merged into master (not all of them survived, though): the method reference operator (eventually reverted), the pipeline operator (reverted almost immediately), pattern matching, and numbered parameters.

I’ve been watching this Shinkansen passing by and thinking: “Wouldn’t it be nice to bring all these goodies to my projects and gems? Is it possible at all?” It turns out it is. And that’s how today’s Ruby Next was born.

In addition to a collection of polyfills (refinements), Ruby Next has acquired another powerful piece of functionality—a transpiler from Ruby to Ruby.

Generally, “transpiler” is a word used to describe source-to-source compilers, i.e., compilers that have the same input and output format. Thus, the Ruby Next transpiler “compiles” Ruby code into other Ruby code without any loss of functionality. More precisely, we transform source code written for the latest/edge Ruby version into source code compatible with older versions:

Transpiling: from edge Ruby source code to source code compatible with older Rubies

The number of different Ruby implementations is growing every year. Nowadays we have mruby, JRuby, TruffleRuby, Opal, RubyMotion, Artichoke, Prism. A transpiler could help developers use new features without waiting for these implementations to add support.

Transpiling is very popular in the world of front-end development, where we have such tools as Babel for JavaScript and PostCSS for CSS.

The reasons why these tools exist are browser incompatibilities and rapid language evolution (to be more precise, the evolution of specifications). You might be surprised, but we do have the same problems in Ruby, too. We have different “browsers” (Ruby runtimes), and, as we already mentioned, the language is changing fast. Of course, the scale of the problem is not as terrifying as the state of front-end development five years ago, but it’s better to be prepared.

From AST to AST

Let’s make a quick overview of how the Ruby Next transpiler works. Advanced technical details will follow in future posts (or conference talks), so today I’m covering just the basics.

The naive way of transpiling would be loading the code as text, applying a few gsub!-s, and writing the result into a new file. Unfortunately, that wouldn’t work even in the simplest case: for example, we can try to transpile the method reference operator (.:) by applying source.gsub!(/\.:(\w+)/, '.method(:\1)'). It works fine unless you have a string or a comment with “.:” inside. Thus, we need something that is context-aware—for instance, an abstract syntax tree.
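To see how quickly the text-based approach falls apart, here is a tiny, made-up illustration (the regexp is the one from the paragraph above; the string and the comment are invented for the example):

source = 'puts "Call .:parse to get a Method object"  # explaining .:something'
source.gsub(/\.:(\w+)/, '.method(:\1)')
# => 'puts "Call .method(:parse) to get a Method object"  # explaining .method(:something)'
# Both the string literal and the comment got rewritten, even though neither
# contains real method reference syntax.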


Let me skip the theory and move right to practice: how do we generate an AST from Ruby source code?

We have multiple tools in the Ruby ecosystem that could be used for generating an AST; to name a few: Ripper, RubyVM::AbstractSyntaxTree, and Parser.

The example is borrowed from one of my favorite books: Learn You Some Erlang for Great Good!.

Let’s take a look at the ASTs generated by these tools for the following example code:

# beach.rb
def beach(*temperature)
  case temperature
  in :celcius | :c, (20..45)
    :favorable
  in :kelvin | :k, (293..318)
    :scientifically_favorable
  in :fahrenheit | :f, (68..113)
    :favorable_in_us
  else
    :avoid_beach
  end
end

Ripper is a built-in Ruby tool (available since 1.9) that allows you to generate symbolic expressions (S-expressions) from source code:

$ ruby -r ripper -e "pp Ripper.sexp(File.read('beach.rb'))"

[:program,
 [[:def,
   [:@ident, "beach", [1, 4]],
   [:paren,
    [:params,
     [:rest_param, [:@ident, "temperature", [1, 11]]],
    ],
   [:bodystmt,
    [[:case,
      [:var_ref, [:@ident, "temperature", [2, 7]]],
      [:in,
       [:aryptn,
        nil,
        [[:binary,
          [:symbol_literal, [:symbol, [:@ident, "celcius", [3, 6]]]],
          ...

Even though Ripper is cryptic, it is actively used by some Ruby hackers. For example, Penelope Phippen is building a Ruby formatter on top of it. Kevin Deisz wrote a Ruby code runtime optimizer called Preval, which is, in fact, a very specific transpiler using Ripper S-exps inside.

As you can see, the return value is a deeply nested array with some identifiers. One problem with Ripper is that there is no documentation on the possible “node” types and no noticeable patterns in the node structure. More importantly, for transpiling purposes, Ripper cannot parse code written for a newer Ruby from an older Ruby. And we cannot force developers to use the latest (and especially the edge) Ruby just for the sake of transpiling.
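As a rough illustration of that last point (mine, not from the Ruby Next codebase): feeding 2.7-only syntax to Ripper on, say, Ruby 2.6 doesn’t raise a helpful error, it just gives up:

# Running under Ruby 2.6, which knows nothing about pattern matching
require "ripper"

Ripper.sexp("case x; in [1, 2]; :ok; end")
# => nil (Ripper returns nil whenever it cannot parse the source)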

The RubyVM::AbstractSyntaxTree module was added to Ruby recently (in 2.6). It provides a much nicer, object-oriented AST representation but has the same problem as Ripper—it’s version-specific:

$ ruby -e "pp RubyVM::AbstractSyntaxTree.parse_file('beach.rb')"

(SCOPE@1:0-14:3
  body:
   (DEFN@1:0-14:3
    mid: :beach
    body:
      (SCOPE@1:0-14:3
       tbl: [:temperature]
       args: ...
       body:
         (CASE3@2:2-13:5 (LVAR@2:7-2:18 :temperature)
            (IN@3:2-12:16
               (ARYPTN@3:5-3:28
                const: nil
                pre:
                  (LIST@3:5-3:28
                     (OR@3:5-3:18 (LIT@3:5-3:13 :celcius) (LIT@3:16-3:18 :c))
                  ...

Finally, Parser is a pure Ruby gem developed originally at Evil Martians by @whitequark:

$ gem install parser
$ ruby-parse ./beach.rb

(def :beach
  (args
    (restarg :temperature))
  (case-match
    (lvar :temperature)
    (in-pattern
      (array-pattern
        (match-alt
          (sym :celcius)
          (sym :c))
        (begin
          (irange
            (int 20)
            (int 45)))) nil
      (sym :favorable))
    ...

Unlike the former two, Parser is a version-independent tool: you can parse code for any supported Ruby version from any Ruby. It has a well-designed API, some useful built-in features (e.g., source rewriting), and has been battle-tested by such a popular tool as RuboCop.
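Here is a minimal taste of that API (a snippet of mine, not from Ruby Next itself):

require "parser/current"

ast = Parser::CurrentRuby.parse("2.times { |i| puts i }")

ast.type     # => :block
ast.children # => [s(:send, s(:int, 2), :times), s(:args, s(:arg, :i)), s(:send, nil, :puts, s(:lvar, :i))]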

These benefits come at a price: Parser is not 100% compatible with Ruby. That means you can write bizarre but valid Ruby code that won’t be recognized correctly by Parser.

Here is the most famous example:

<<"A#{b}C"
#{
  <<"A#{b}C"
A#{b}C
}
str
A#{b}C

#=> "\nstr\n"

Parser generates the following AST for this code:

(dstr
    (begin
      (dstr
        (str "A")
        (begin
          (send nil :b))))
    (str "\n")
    (str "str\n")
    (str "A")
    (begin
      (send nil :b)))

The problem is with the (send nil :b) nodes: Parser treats #{...} within the heredoc labels as interpolation, but it’s not.

I hope you won’t use this dark knowledge to break all the libraries relying on Parser 😈

As you can see, no instrument is perfect. Writing a parser from scratch or trying to extract the one used by MRI would have been too much effort for an experimental project.

I decided to sacrifice Ruby’s weirdness in favor of productivity and went with Parser.

One more selling point for choosing Parser was the existence of the Unparser gem. As its name suggests, it generates Ruby code from a Parser-generated AST.
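A minimal round-trip looks something like this (note that Unparser normalizes formatting, so the output is semantically equivalent to, but not necessarily character-for-character identical with, the input):

require "parser/current"
require "unparser"

ast = Parser::CurrentRuby.parse("a.map { |x| x * 2 }")
puts Unparser.unparse(ast)
# prints an equivalent snippet of Ruby reconstructed purely from the AST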

From Ruby to Ruby

The final Ruby Next code for transpiling looks like this:

def transpile(source)
  ast = Parser::Ruby27.parse(source)

  # perform the required AST modification
  new_ast = transform ast

  # return the new source code
  Unparser.unparse(new_ast)
end

Within the #transform method we pass the AST through the rewriters pipeline:

def transform(ast)
  rewriters.inject(ast) do |tree, rewriter|
    rewriter.new.process(tree)
  end
end

Each rewriter is responsible for a single feature. Let’s take a look at the method reference operator rewriter (yeah, this proposal has been reverted, but it’s perfect for demonstration purposes):

module Rewriters
  class MethodReference < Base
    def on_meth_ref(node)
      receiver, mid = *node.children

      node.updated(        # (meth-ref
        :send,             #   (const nil :C) :m)
        [                  #
          receiver,        # ->
          :method,         #
          s(:sym, mid)     # (send
        ]                  #  (const nil :C) :method
      )                    #    (sym :m)
    end
  end
end

All we do is replace the meth-ref node with the corresponding send node. Easy-peasy!
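In case you are wondering what the Base class could look like, here is a bare-bones sketch built on top of Parser’s visitor (the real Ruby Next base rewriter does quite a bit more bookkeeping, so treat this as an approximation):

require "parser/current"

module Rewriters
  # Walks the AST and lets subclasses override the on_<node type>
  # handlers they care about (like on_meth_ref above)
  class Base < Parser::AST::Processor
    private

    # Mirrors Parser's s() helper for building new nodes
    def s(type, *children)
      Parser::AST::Node.new(type, children)
    end
  end
end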

Rewriting is not always that simple. For example, the pattern matching rewriter contains more than eight hundred lines of code.

The Ruby Next transpiler currently supports all Ruby 2.7 features except beginless ranges.

Let’s take a look at the transpiled version of our beach.rb:

def beach(*temperature)
  __m__ = temperature
  case when ((__p_1__ = (__m__.respond_to?(:deconstruct) && (((__m_arr__ = __m__.deconstruct) || true) && ((Array === __m_arr__) || Kernel.raise(TypeError, "#deconstruct must return Array"))))) && ((__p_2__ = (2 == __m_arr__.size)) && (((:celcius === __m_arr__[0]) || (:c === __m_arr__[0])) && ((20..45) === __m_arr__[1]))))
    :favorable

  when (__p_1__ && (__p_2__ && (((:kelvin === __m_arr__[0]) || (:k === __m_arr__[0])) && ((293..318) === __m_arr__[1]))))
    :scientifically_favorable
  when (__p_1__ && (__p_2__ && (((:fahrenheit === __m_arr__[0]) || (:f === __m_arr__[0])) && ((68..113) === __m_arr__[1]))))
    :favorable_in_us
  else
    :avoid_beach
  end
end

Wait, what? This is unbearable! Don’t worry; this is not code for you to read or edit, this is code for the Ruby runtime to interpret. And machines are good at understanding such code.

Transpiled code is for machines, not humans.

However, there is one case when we want transpiled code to be as structurally close to the original as possible. By “structurally,” I mean having the same layout or line numbers.

In the example above, line 7 of the transpiled code (:scientifically_favorable) is different from the original (in :fahrenheit | :f, (68..113)).

When could this be a problem? During debugging. Debuggers and consoles (such as IRB or Pry) use the original source code information, but line numbers at runtime will be different. Happy debugging 😈!

To overcome this issue, we introduced a “rewrite” transpiling mode in Ruby Next 0.5.0. It uses the Parser rewriting feature and applies changes to the source code in-place (the same way RuboCop autocorrection works, by the way).

Ruby Next uses the “generation” (AST-to-AST-to-Ruby) transpiling mode by default since it’s faster and more predictable. In either case, the actual backported code is similar.

Performance and compatibility

One question that usually comes up: how does the performance of the resulting case/when compare to the original, elegant case/in? Prepare to be surprised by the results of the benchmark:

Comparison:
          transpiled:   1533709.6 i/s
            baseline:    923655.5 i/s - 1.66x  slower

I’m working on porting these optimizations back to MRI. Check this PR for more.

How did the transpiled code turn out to be faster than the native implementation? I added some optimizations to the pattern matching algorithm, e.g., #deconstruct value caching.

How can I be sure that these optimizations do not break compatibility? Thank you for yet another good question.

To make sure that the transpiled code (and the backported polyfills) work as expected, I use RubySpec and Ruby’s own tests. That doesn’t mean the transpiled code behaves 100% identically to the MRI code, but at least it behaves the way it’s expected to. (And to be honest, I know some weird edge cases that break compatibility, but I won’t tell you.)

Run-time vs. build-time

Now that we’ve learned about the inner workings of transpiling, it is time to answer the most intriguing question: how do we integrate Ruby Next into library or application development?

Unlike front-end developers, we Rubyists usually do not need to “build” code (unless you’re using mruby, Opal, etc.). We just call ruby my_script.rb, and that’s it. So how do we inject transpiled code into the interpreter?

Ruby Next assumes two strategies depending on the nature of your code, i.e., whether you develop a gem or an application.

For applications, we provide the “run-time mode”. In this mode, every loaded (required) Ruby file from the application root directory is transpiled before being evaluated by the VM.

The following pseudo-code describes this process:

# Patch Kernel to hijack require
module Kernel
  alias_method :require_without_ruby_next, :require
  def require(path)
    realpath = resolve_feature_path(path)

    # only transpile application source files, not gems
    return require_without_ruby_next(path) unless RubyNext.transpilable?(realpath)

    source = File.read(realpath)

    new_source = RubyNext.transpile source
    # nothing to transpile
    return require_without_ruby_next(path) if source == new_source

    # Load code the same way as it's loaded via `require`
    RubyNext.vm_eval new_source, realpath

    true
  end
end

The actual code can be found here.

You can activate the run-time transpiling in two steps:

  • Add gem "ruby-next" to your Gemfile.
  • Add require "ruby-next/language/runtime" as early as possible during your application boot process (e.g., in config/boot.rb for Rails projects), as in the sketch below.
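For a Rails project, that could look roughly like this (only the last require is Ruby Next-specific, the rest is a typical config/boot.rb):

# config/boot.rb
ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../Gemfile", __dir__)

require "bundler/setup" # Set up gems listed in the Gemfile.

# Transpile application files on load, before the rest of the app is required
require "ruby-next/language/runtime"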

If you’re afraid of using such a powerful monkey-patch from a very new library (I am), we have you covered with the Bootsnap integration. This way, we move the core-patching responsibility to Shopify (they know how to do that right).

When developing a gem, you should think about many aspects of a good library (see GemCheck), including the number of dependencies and possible side effects. Thus, enabling the Ruby Next run-time mode within a gem doesn’t seem like a good idea.

Instead, we want to make it possible to ship a gem with code for all supported Ruby versions at once, i.e., with pre-transpiled code. With Ruby Next, adopting this flow consists of the following steps:

  • Generate transpiled code using the Ruby Next CLI (ruby-next nextify lib/). That creates a lib/.rbnext folder with the files required for older versions.
  • Configure Ruby’s $LOAD_PATH to look for files in the corresponding lib/.rbnext/<version> folder by calling RubyNext::Language.setup_gem_load_path in your gem’s root file (read more), as sketched below.
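Here is roughly what a gem’s entry point could look like after these two steps (my_gem is a placeholder name, and the trailing requires are illustrative):

# lib/my_gem.rb
require "ruby-next/language/setup"

# Prepend the matching lib/.rbnext/<version> directories to the load path,
# so the pre-transpiled files are picked up on older Rubies
RubyNext::Language.setup_gem_load_path

require "my_gem/version"
require "my_gem/core"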

On one hand, that adds bloat to the resulting .gem package (some files are duplicated). On the other hand, your library users should not care about the transpiling at all. Oh, and now you can use modern Ruby in your gem!

From backporting Ruby to pushing it forward

So far, we have considered Ruby Next as a syntax and API backporting tool only. To be honest, that’s what I was initially building it for, no fine print.

The paradigm shifted in November 2019, as a consequence of two events: the method reference operator reversion and RubyConf, where I had an opportunity to discuss the language’s evolution with many prominent Rubyists, including Matz himself.

Why so much drama around just two characters, .:? The situation with the method reference operator is very atypical: it was merged into master on December 31, 2018, and reverted almost eleven months later, on November 12, 2019. Blog posts were published with example usages; a transpiler was written to backport this feature… Many Ruby developers found it useful and were waiting for it in the 2.7 release.

The feature was canceled for a reason. Is it OK to revert something that has lived in master for almost a year? Yes, because the feature had the experimental status.

The proper question is: was there an experiment at all? I’d say “no,” because only a small portion of community members were able to taste this feature. Most developers do not build edge Ruby from source or use preview releases for applications more complex than “Hello, World!”.

Ruby 2.7 came out with a major experimental feature—pattern matching. For an application or library developer, the word “experimental” is a red flag. In essence, there is a risk that significant refactoring will be required if the experiment fails (remember the refinements story?). Will there be enough experimental data to assess the results and promote or revert the feature?

We need more people involved in the experiments.

Currently, it is mostly the people involved in Ruby development itself, plus a few dozen enthusiastic Ruby hackers, who follow the ruby/ruby master branch and discuss proposals in the issue tracking system.

Can we collect feedback from more Ruby developers of different backgrounds and skill levels?

Let’s turn our eyes towards the front-end development world again.

The JavaScript (or, more precisely, ECMAScript) specification is developed by the TC39 group: “JavaScript developers, implementers, academics, and more, collaborating with the community to maintain and evolve the definition of JavaScript.”

They have a well-defined process for introducing new features, which operates in stages. There are four “maturity” stages: Proposal, Draft, Candidate, and Finished. Only the features from the last one, Finished, are included in the specification.

Features from the Proposal and Draft stages are considered experimental, and one particular tool plays a significant role in embracing these experiments—Babel.

For a long time, the requirements for Draft stage acceptance have contained the following sentence: “Two experimental implementations of the feature are needed, but one of them can be in a transpiler such as Babel.”

This post provides some details on why we need transpilers and Babel in particular.

That is, a transpiler could be used to assess experimental features.

The example of Babel demonstrates that this is a very efficient approach.

What about having something similar for Ruby development? Instead of merge-and-revert stories, we could have The Process of accepting features based on broader feedback from transpiler users.

Ruby Next aims to be such a transpiler, the one to move Ruby forward.

I’ve started using Ruby Next in my projects recently (check out anyway_config for a Ruby gem example and ACLI for mruby). It is no longer an experimental project, but it is still at the very beginning of its open-source journey.

The easiest way to give Ruby Next (and new Ruby features) a try is to use the -ruby-next option of the ruby command:

$ ruby -v
2.5.3p105

# Gem must be installed
$ gem install ruby-next

$ ruby -ruby-next -e "
def greet(val) =
  case val
    in hello: hello if hello =~ /human/i
      '  '
    in hello: 'martian'
      ':alien:'
    end

greet(hello: 'martian') => greeting
puts greeting
"

:alien:

As you can see, Ruby Next already supports endless method definition and right-hand assignment, two new experimental features. Will they reach the final 3.0 release? Not sure, but I want to give them a try and provide feedback to the Ruby Core team. What about you?

I want Ruby Next to play an essential role in the adoption of new features and language evolution. Today, you can help us take a step towards this future by using Ruby Next in your library or application development. Also, feel free to drop me a line with whatever you think about the idea as a whole!

As usual, don’t hesitate to contact Evil Martians if you want me or my colleagues to give you a hand with application development.

