The Koka Programming Language
source link: https://koka-lang.github.io/koka/doc/book.html
1. Getting started
Welcome to Koka – a strongly typed functional-style language with effect types and handlers.
Note: Koka v2 is a research language that is currently under development and not ready for production use. Nevertheless, the language is stable and the compiler implements the full specification. The main things lacking at the moment are libraries, package management, and deep IDE integration.
News:
- 2021-02-04 (pinned) The Context Free youtube channel posted a short and fun video about effects in Koka (and 12 (!) other languages).
- 2021-09-01 (pinned) The ICFP'21 tutorial “Programming with Effect Handlers and FBIP in Koka” is now available on youtube.
- 2022-02-07: Koka v2.4.0 released: improved specialization and int operations, added the rbtree-fbip sample, improved grammar (pub instead of public, removed private (as everything is private by default now), final ctl instead of brk, underscores in number literals, etc.), renamed double to float64, various bug fixes.
- 2021-12-27: Koka v2.3.8 released: improved int performance, various bug fixes, updated wasm backend, initial conan support, fixed the js backend.
- 2021-11-26: Koka v2.3.6 released: maybe-like types were already value types, but now also no longer need heap allocation if not nested (so [Just(1)] uses the same heap space as [1]), improved atomic refcounting (by Anton Lorenzen), improved specialization (by Steven Fontanella), various small fixes, added std/os/readline, fixed the build on FreeBSD.
- 2021-10-15: Koka v2.3.2 released, with initial wasm support (use --target=wasm, and install emscripten and wasmtime), improved reuse specialization (by Anton Lorenzen), and various bug fixes.
- 2021-09-29: Koka v2.3.1 released, with improved TRMC optimizations, improved reuse (the rbtree benchmark is as fast as C++ now), and faster effect operations. Experimental: allow elision of -> in anonymous function expressions (e.g. xs.map( fn(x) x + 1 )) and operation clauses. Command line options changed a bit, with .koka as the standard output directory.
- 2021-09-20: Koka v2.3.0 released, with new brace elision and if/match conditions without parenthesis. Updated the javascript backend to use ES6 modules and BigInt. New module std/num/int64, improved effect operation performance.
- 2021-09-05: Koka v2.2.1 released, with initial parallel tasks, the binary-trees benchmark, and brace elision.
- 2021-08-26: Koka v2.2.0 released, with improved simplification (by Rashika B), cross-module specialization (by Steven Fontanella), and borrowing annotations with improved reuse analysis (by Anton Lorenzen).
- 2021-08-26: At 12:30 EST was the live Koka tutorial at ICFP'21; see it on youtube.
- 2021-08-23: “Generalized Evidence Passing for Effect Handlers”, by Ningning Xie and Daan Leijen, presented at ICFP'21. See it on youtube or read the paper.
- 2021-08-22: “First-class Named Effect Handlers”, by Youyou Cong, Ningning Xie, and Daan Leijen, presented at HOPE'21. See it on youtube or read the paper.
- 2021-06-23: Koka v2.1.9 released, with initial cross-module specialization (by Steven Fontanella).
- 2021-06-17: Koka v2.1.8 released, with initial Apple M1 support.
- The Perceus paper won a distinguished paper award at PLDI'21!
- 2021-06-10: Koka v2.1.6 released.
- 2021-05-31: Koka v2.1.4 released.
- 2021-05-01: Koka v2.1.2 released.
- 2021-03-08: Koka v2.1.1 released.
- 2021-02-14: Koka v2.0.16 released.
- 2020-12-12: Koka v2.0.14 released.
- 2020-12-02: Koka v2.0.12 released.
- 2020-11-29: Perceus technical report published (pdf).
1.1. Installing the compiler
On macOS (x64, M1), you can install and upgrade Koka using Homebrew:
brew install koka
On Windows (x64), open a cmd
prompt and use:
curl -sSL -o %tmp%\install-koka.bat https://github.com/koka-lang/koka/releases/latest/download/install.bat && %tmp%\install-koka.bat
On Linux (x64, arm64) and FreeBSD (x64) (and macOS), you can install Koka using:
curl -sSL https://github.com/koka-lang/koka/releases/latest/download/install.sh | sh
There are also installation packages for various Linux distributions: Ubuntu/Debian (x64, arm64), Alpine (x64, arm64), Arch (x64, arm64), Red Hat (x64), and openSUSE (x64).
After installation, verify that Koka installed correctly:
$ koka
_ _
| | | |
| | _ ___ | | _ __ _
| |/ / _ \| |/ / _' | welcome to the koka interactive compiler
| ( (_) | ( (_| | version 2.4.0, Feb 7 2022, libc x64 (gcc)
|_|\_\___/|_|\_\__,_| type :? for help, and :q to quit
loading: std/core
loading: std/core/types
loading: std/core/hnd
>
Type :q
to exit the interactive environment.
For detailed installation instructions and other platforms see the releases page. It is also straightforward to build the compiler from source.
1.2. Running the compiler
You can compile a Koka source as (note that all samples
are pre-installed):
$ koka samples/basic/caesar.kk
compile: samples/basic/caesar.kk
loading: std/core
loading: std/core/types
loading: std/core/hnd
loading: std/num/float64
loading: std/text/parse
loading: std/num/int32
check : samples/basic/caesar
linking: samples_basic_caesar
created: .koka/v2.3.1/gcc-debug/samples_basic_caesar
and run the resulting executable:
$ .koka/v2.3.1/gcc-debug/samples_basic_caesar
plain : Koka is a well-typed language
encoded: Krnd lv d zhoo-wbshg odqjxdjh
cracked: Koka is a well-typed language
The -O2
flag builds an optimized program. Let's try it on a purely functional implementation
of balanced insertion in a red-black tree (rbtree.kk
):
$ koka -O2 -o kk-rbtree samples/basic/rbtree.kk
...
linking: samples_basic_rbtree
created: .koka/v2.3.1/gcc-drelease/samples_basic_rbtree
created: kk-rbtree
$ time ./kk-rbtree
420000
real 0m0.626s
...
(On Windows you can give the --kktime
option to see the elapsed time).
We can compare this against an in-place updating C++ implementation using std::map
(rbtree.cpp
) (which also uses a
red-black tree internally):
$ clang++ --std=c++17 -o cpp-rbtree -O3 /usr/local/share/koka/v2.3.1/lib/samples/basic/rbtree.cpp
$ time ./cpp-rbtree
420000
real 0m0.667s
...
The excellent performance relative to C++ here (on Ubuntu 20.04 with an AMD 5950X) is the result of Perceus automatically transforming the fast path of the pure functional rebalancing to use mostly in-place updates, closely mimicking the imperative rebalancing code of the hand optimized C++ library.
1.3. Running the interactive compiler
Without giving any input files, the interactive environment runs by default:
$ koka
_ _
| | | |
| | _ ___ | | _ __ _
| |/ / _ \| |/ / _' | welcome to the koka interactive compiler
| ( (_) | ( (_| | version 2.3.1, Sep 21 2021, libc x64 (clang-cl)
|_|\_\___/|_|\_\__,_| type :? for help, and :q to quit
loading: std/core
loading: std/core/types
loading: std/core/hnd
>
Now you can test some expressions:
> println("hi koka")
check : interactive
check : interactive
linking: interactive
created: .koka\v2.3.1\clang-cl-debug\interactive
hi koka
> :t "hi"
string
> :t println("hi")
console ()
Or load a demo (use tab
completion to avoid typing too much):
> :l samples/basic/fibonacci
compile: samples/basic/fibonacci.kk
loading: std/core
loading: std/core/types
loading: std/core/hnd
check : samples/basic/fibonacci
modules:
samples/basic/fibonacci
> main()
check : interactive
check : interactive
linking: interactive
created: .koka\v2.3.1\clang-cl-debug\interactive
The 10000th fibonacci number is 336447648764317832666216120051075433103021484606800639065647699746800814421666623681555955136337340255820653326808361593737347904838652682630408924630564318873545443695598274916066020998841839338646527313000888302692356736131351175792974378544137521305205043477016022647583189065278908551543661595829872796829875106312005754287834532155151038708182989697916131278562650331954871402142875326981879620469360978799003509623022910263681314931952756302278376284415403605844025721143349611800230912082870460889239623288354615057765832712525460935911282039252853934346209042452489294039017062338889910858410651831733604374707379085526317643257339937128719375877468974799263058370657428301616374089691784263786242128352581128205163702980893320999057079200643674262023897831114700540749984592503606335609338838319233867830561364353518921332797329081337326426526339897639227234078829281779535805709936910491754708089318410561463223382174656373212482263830921032977016480547262438423748624114530938122065649140327510866433945175121615265453613331113140424368548051067658434935238369596534280717687753283482343455573667197313927462736291082106792807847180353291311767789246590899386354593278945237776744061922403376386740040213303432974969020283281459334188268176838930720036347956231171031012919531697946076327375892535307725523759437884345040677155557790564504430166401194625809722167297586150269684431469520346149322911059706762432685159928347098912847067408620085871350162603120719031720860940812983215810772820763531866246112782455372085323653057759564300725177443150515396009051686032203491632226408852488524331580515348496224348482993809050704834824493274537326245677558790891871908036620580095947431500524025327097469953187707243768259074199396322659841474981936092852239450397071654431564213281576889080587831834049174345562705202235648464951961124602683139709750693826487066132645076650746115126775227486215986425307112984411826226610571635150692600298617049454250474913
78115154139941550671256271197133252763631939606902895650288268608362241082050562430701794976171121233066073310059947366875
You can also set command line options in the interactive environment using :set <options>
.
For example, we can load the rbtree
example again and print out the elapsed runtime with --showtime
:
> :set --showtime
> :l samples/basic/rbtree.kk
...
linking: interactive
created: .koka\v2.3.1\clang-cl-debug\interactive
> main()
...
420000
info: elapsed: 4.104s, user: 4.046s, sys: 0.062s, rss: 231mb
and then enable optimizations with -O2
and run again (on Windows with an AMD 5950X):
> :set -O2
> :r
...
linking: interactive
created: .koka\v2.3.1\clang-cl-drelease\interactive
> main()
...
420000
info: elapsed: 0.670s, user: 0.656s, sys: 0.015s, rss: 198mb
And finally we quit the interpreter:
> :q
I think of my body as a side effect of my mind.
-- Carrie Fisher (1956)
1.4. Samples and Editors
The samples/syntax
and samples/basic
directories contain various basic Koka examples to start with. If you type:
> :l samples/
in the interpreter, you can tab
twice to see the available sample files and directories.
Use :s
to see the source of a loaded module.
If you use VS Code or Atom (or if you set the koka_editor
environment variable manually),
you can type :e
in the interactive prompt to edit your program further. For example,
> :l samples/basic/caesar
...
modules:
samples/basic/caesar
> :e
<edit the source and reload>
> :r
...
modules:
samples/basic/caesar
> main()
What next?
Basic Koka syntax Browse the Library documentation
2. Why Koka?
There are many new languages being designed, but only few bring fundamentally new concepts – like Haskell with pure versus monadic programming, or Rust with borrow checking. Koka distinguishes itself through effect typing, effect handlers, and Perceus memory management:
Minimal but General: The core of Koka consists of a small set of well-studied language features, like first-class functions, a polymorphic type- and effect system, algebraic data types, and effect handlers. Each of these is composable and avoids the addition of “special” extensions by being as general as possible.
Effect Types: Koka tracks the (side) effects of every function in its type, distinguishing pure from effectful computations. The precise effect typing gives Koka rock-solid semantics backed by well-studied category theory, which makes Koka particularly easy to reason about for both humans and compilers.
Effect Handlers: Effect handlers let you define advanced control abstractions, like exceptions, async/await, or probabilistic programs, as a user library in a typed and composable way.
Perceus Reference Counting: Perceus is an advanced compilation method for reference counting. This lets Koka compile directly to C code without needing a garbage collector or runtime system, and also gives Koka excellent performance in practice.
Reuse Analysis: Through Perceus, Koka can do reuse analysis and optimize functional-style programs to use in-place updates.
2.1. Minimal but General
Koka has a small core set of orthogonal, well-studied language features – but each of these is as general and composable as possible, such that we do not need further “special” extensions. Core features include first-class functions, a higher-rank impredicative polymorphic type- and effect system, algebraic data types, and effect handlers.
fun hello-ten()
  var i := 0
  while { i < 10 }
    println("hello")
    i := i + 1
As an example of the min-gen design principle, Koka implements most
control-flow primitives as regular functions. An anonymous function can
be written as fn(){ <body> }
; but as a syntactic convenience, any
function without arguments can be shortened further to use just braces,
as { <body> }
. Moreover, using brace elision, any
indented block automatically gets curly braces.
We can write a while
loop now using regular
function calls as shown in the example,
where the call to while
is desugared to
while( fn(){ i < 10 }, fn(){ ... } )
.
This also naturally leads to
consistency: an expression between parenthesis is always evaluated
before a function call, whereas an expression between braces (ah,
suspenders!) is suspended and may be never evaluated or more than once
(as in our example). This is inconsistent in most other languages where
often the predicate of a while
loop is written in parenthesis but may
be evaluated multiple times.
2.2. Effect Typing
Koka infers and tracks the effect of every function in its type – and a function type has 3 parts: the argument types, the effect type, and the type of the result. For example:
fun sqr    : (int) -> total int       // total: mathematical total function
fun divide : (int,int) -> exn int     // exn: may raise an exception (partial)
fun turing : (tape) -> div int        // div: may not terminate (diverge)
fun print  : (string) -> console ()   // console: may write to the console
fun rand   : () -> ndet int           // ndet: non-deterministic
The precise effect typing gives Koka rock-solid semantics and deep safety guarantees backed by well-studied category theory, which makes Koka particularly easy to reason about for both humans and compilers. (Given the importance of effect typing, the name Koka was derived from the Japanese word for effective (効果, こうか, Kōka)).
A function without any effect is called total
and corresponds to
mathematically total functions – a good place to be. Then we have
effects for partial functions that can raise exceptions (exn
), and
potentially non-terminating functions as div
(divergent). The
combination of exn
and div
is called pure
as that corresponds to
Haskell's notion of purity. On top of that we find mutability (as st
)
up to full non-deterministic side effects in io
.
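
As an illustration (a hypothetical function, not one of the installed samples), a recursive function that may also throw combines both effects, and Koka writes the combined effect row as <exn,div>, which is exactly pure:

```koka
// hypothetical example: exn (may throw) plus div (may not terminate) = pure
fun steps( n : int ) : <exn,div> int
  if n <= 0 then throw("positive numbers only")  // exn
  elif n == 1 then 0
  elif n % 2 == 0 then 1 + steps(n / 2)          // recursion: div inferred
  else 1 + steps(3*n + 1)
```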
Effects can be polymorphic as well. Consider mapping a function over a list:
fun map( xs : list<a>, f : a -> e b ) : e list<b>
  match xs
    Cons(x,xx) -> Cons( f(x), map(xx,f) )
    Nil        -> Nil
Single letter types are polymorphic (aka, generic), and Koka infers
that you map from a list of elements a
to a list of elements of
type b
. Since map
itself has no intrinsic effect, the effect
of applying map
is exactly the effect of the function f
that
is applied, namely e
.
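
For instance (a hypothetical sketch, not from the samples), the same call to map can be total or effectful depending only on the function passed to it:

```koka
// hypothetical illustrations of effect-polymorphic map
fun squares( xs : list<int> ) : total list<int>
  xs.map( fn(x) x*x )                   // x*x is total, so the call is total

fun noisy-squares( xs : list<int> ) : console list<int>
  xs.map( fn(x) { println(x); x*x } )   // println adds the console effect
```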
2.3. Effect Handlers
Another example of the min-gen design principle: instead of various special language and compiler extensions to support exceptions, generators, async/await, etc., Koka has full support for algebraic effect handlers – these let you define advanced control abstractions like async/await as a user library in a typed and composable way.
Here is an example of an effect definition with
one control (ctl
) operation to yield int
values:
effect yield
  ctl yield( i : int ) : bool
Once the effect is declared, we can use it for example to yield the elements of a list:
fun traverse( xs : list<int> ) : yield ()
  match xs
    Cons(x,xx) -> if yield(x) then traverse(xx) else ()
    Nil        -> ()
The traverse
function calls yield
and therefore gets the yield
effect in its type,
and if we want to use traverse
, we need to handle the yield
effect.
This is much like defining an exception handler, except we can receive values (here an int
),
and we can resume with a result (which determines if we keep traversing):
fun print-elems() : console ()
  with ctl yield(i)
    println("yielded " ++ i.show)
    resume(i<=2)
  traverse([1,2,3,4])
The with
statement binds the handler for yield
control operation over the
rest of the scope, in this case traverse([1,2,3,4])
.
Every time yield
is called, our control handler is called, prints the current value,
and resumes to the call site with a boolean result (indeed, dynamic binding with static typing!).
Note how the handler discharges the yield
effect – and replaces
it with a console
effect. When we run the example, we get:
yielded 1
yielded 2
yielded 3
Learn more about with
statements
Learn more about effect handlers
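
As a variation (hypothetical, not in the samples), the handler is free to choose how to resume; resuming with True unconditionally visits every element of the list:

```koka
// hypothetical variation: always resume with True to traverse the whole list
fun print-all-elems() : console ()
  with ctl yield(i)
    println("element " ++ i.show)
    resume(True)
  traverse([1,2,3,4])
```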
2.4. Perceus Optimized Reference Counting
Perceus is the compiler-optimized reference counting technique that Koka uses for automatic memory management [11, 18]. This (together with evidence passing [19–21]) enables Koka to compile directly to plain C code without needing a garbage collector or runtime system.
Perceus uses extensive static analysis to aggressively optimize the reference counts. Here the strong semantic foundation of Koka helps a lot: inductive data types cannot form cycles, and potential sharing across threads can be reliably determined.
Normally we need to make a fundamental choice when managing memory:
- We either use manual memory management (C, C++, Rust) and get the best performance, but at a significant programming burden,
- Or we use garbage collection (OCaml, C#, Java, Go, etc.), but now we need a runtime system and pay a price in performance, memory usage, and unpredictable latencies.
With Perceus, we hope to cross this gap and our goal is to be within 2x of the performance of C/C++. Initial benchmarks are encouraging and show Koka to be close to C performance on various memory intensive benchmarks.
Read the Perceus technical report
2.5. Reuse Analysis
Perceus also performs reuse analysis as part of reference
counting analysis. This pairs pattern matches with constructors of the
same size and reuses them in-place if possible. Take for example,
the map
function over lists:
fun map( xs : list<a>, f : a -> e b ) : e list<b>
  match xs
    Cons(x,xx) -> Cons( f(x), map(xx,f) )
    Nil        -> Nil
Here the matched Cons
can be reused by the new Cons
in the branch. This means if we map over a list that is not shared,
like list(1,100000).map(sqr).sum
,
then the list is updated in-place without any extra allocation.
This is very effective for many functional style programs.
void map( list_t xs, function_t f,
list_t* res)
{
while (is_Cons(xs)) {
if (is_unique(xs)) { // if xs is not shared..
box_t y = apply(dup(f),xs->head);
if (yielding()) { ... } // if f yields to a general ctl operation..
else {
xs->head = y;
*res = xs; // update previous node in-place
res = &xs->tail; // set the result address for the next node
xs = xs->tail; // .. and continue with the next node
}
}
else { ... } // slow path allocates fresh nodes
}
*res = Nil;
}
Moreover, the Koka compiler also implements tail-recursion modulo cons (TRMC) and instead of using a recursive call, the function is eventually optimized into an in-place updating loop for the fast path, similar to the C code example on the right.
Importantly, the reuse optimization is guaranteed and a programmer can see when the optimization applies. This leads to a new programming technique we call FBIP: functional but in-place. Just like tail-recursion allows us to express loops with regular function calls, reuse analysis allows us to express many imperative algorithms in a purely functional style.
2.6. Specialization
As another example of the effectiveness of Perceus and the strong semantics of the Koka core language, we can look at the red-black tree example and the code generated when folding a binary tree. The red-black tree is defined as:
type color
  Red
  Black

type tree<k,a>
  Leaf
  Node(color : color, left : tree<k,a>, key : k, value : a, right : tree<k,a>)
We can generically fold over a tree t
with a function f
as:
fun fold(t : tree<k,a>, acc : b, f : (k, a, b) -> b) : b
  match t
    Node(_,l,k,v,r) -> r.fold( f(k,v,l.fold(acc,f)), f)
    Leaf -> acc
This is used in the example to count all the True
values in
a tree t : tree<k,bool>
as:
val count = t.fold(0, fn(k,v,acc) if v then acc+1 else acc)
This may look quite expensive, since we pass a polymorphic
first-class function that uses arbitrary precision integer arithmetic.
However, the Koka compiler first specializes the fold
definition
to the passed function, then simplifies the resulting monomorphic code,
and finally applies Perceus to insert reference count instructions.
This results in the following internal core code:
fun spec-fold(t : tree<k,bool>, acc : int) : int
  match t
    Node(_,l,k,v,r) ->
      if unique(t) then { drop(k); free(t) } else { dup(l); dup(r) }   // perceus inserted
      val x = if v then 1 else 0
      spec-fold(r, spec-fold(l,acc) + x)
    Leaf ->
      drop(t)
      acc

val count = spec-fold(t,0)
When compiled via the C backend, the generated assembly instructions on arm64 become:
spec_fold:
...
LBB15_3:
mov x21, x0 ; x20 is t, x21 = acc (x19 = koka context _ctx)
LBB15_4: ; the "match(t)" point
cmp x20, #9 ; is t a Leaf?
b.eq LBB15_1 ; if so, goto Leaf branch
LBB15_5: ; otherwise, this is the Node(_,l,k,v,r) branch
mov x23, x20 ; load the fields of t:
ldp x22, x0, [x20, #8] ; x22 = l, x0 = k (ldp == load pair)
ldp x24, x20, [x20, #24] ; x24 = v, x20 = r
ldr w8, [x23, #4] ; w8 = reference count (0 is unique)
cbnz w8, LBB15_11 ; if t is not unique, goto cold path to dup the members
tbz w0, #0, LBB15_13 ; if k is allocated (bit 0 is 0), goto cold path to free it
LBB15_7:
mov x0, x23 ; call free(t)
bl _mi_free
LBB15_8:
mov x0, x22 ; call spec_fold(l,acc,_ctx)
mov x1, x21
mov x2, x19
bl spec_fold
cmp x24, #1 ; boxed value is False?
b.eq LBB15_3 ; if v is False, the result in x0 is the accumulator
add x21, x0, #4 ; otherwise add 1 (as a small int 4*n)
orr x8, x21, #1 ; check for bigint or overflow in one test
cmp x8, w21, sxtw ; (see kklib/include/integer.h for details)
b.eq LBB15_4 ; and tail-call into spec_fold if no overflow or bigint
mov w1, #5 ; otherwise, use generic bigint addition
mov x2, x19
bl _kk_integer_add_generic
b LBB15_3
...
The polymorphic fold
with its higher order parameter
is eventually compiled into a tight loop with close to optimal
assembly instructions.
Here we see too that the node t is freed explicitly as soon as it is
no longer live. This is usually earlier than scope-based deallocation
(like RAII), and therefore Perceus can guarantee to be garbage-free:
in a (cycle-free) program, objects are always immediately
deallocated as soon as they become unreachable [11, 18].
Moreover, it is fully deterministic and behaves just like regular
malloc/free calls.
Reference counting may still seem expensive compared to trace-based garbage collection
which only (re)visits all live objects and never needs to free objects
explicitly. However, Perceus usually frees an object right after its
last use (like in our example), and thus the memory is still in the
cache reducing the cost of freeing it. Also, Perceus never (re)visits
live objects arbitrarily, which may thrash the caches, especially if the
live set is large. As such, we think the deterministic behavior of
Perceus together with the garbage-free property may work out better
in practice.
Read the technical report on garbage-free and frame-limited reuse
3. A Tour of Koka
This is a short introduction to the Koka programming language.
Koka is a function-oriented language that separates pure values from
side-effecting computations (The word ‘kōka’ (or 効果) means
“effect” or “effective” in Japanese). Koka is also
flexible and fun: Koka has many features that help programmers to easily
change their data types and code organization correctly even in large-scale
programs, while having a small strongly-typed language core with a familiar
brace syntax.
3.1. Basics
3.1.1. Hello world
As usual, we start with the familiar Hello world program:
fun main()
  println("Hello world!")   // println output
Functions are declared using the fun
keyword (and anonymous functions with fn
).
Due to brace elision, any indented blocks implicitly get curly braces,
and the example can also be written as:
fun main() {
  println("Hello world!")   // println output
}
using explicit braces. Here is another short example program that encodes a string using the Caesar cipher, where each lower-case letter in a string is replaced by the letter three places up in the alphabet:
fun encode( s : string, shift : int )
  fun encode-char(c)
    if c < 'a' || c > 'z' then return c
    val base = (c - 'a').int
    val rot  = (base + shift) % 26
    (rot.char + 'a')
  s.map(encode-char)

fun caesar( s : string ) : string
  s.encode( 3 )
In this example, we declare a local function encode-char
which encodes a
single character c
. The final statement s.map(encode-char)
applies the
encode-char
function to each character in the string s
, returning a
new string where each character is Caesar encoded. The result of the final
statement in a function is also the return value of that function, and you can
generally leave out an explicit return
keyword.
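
To try it out, a minimal main (assuming the definitions above) could be:

```koka
// assumes the encode/caesar definitions above
fun main()
  caesar("koka is fun").println   // each lower-case letter shifts up by three: "nrnd lv ixq"
```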
3.1.2. Dot selection
Koka is a function-oriented language where functions and data form the
core of the language (in contrast to objects for example). In particular, the
expression s.encode(3)
does not select the encode
method from the
string
object, but it is simply syntactic sugar for the function call
encode(s,3)
where s
becomes the first argument. Similarly, c.int
converts a character to an integer by calling int(c)
(and both expressions
are equivalent). The dot notation is intuitive and quite convenient to
chain multiple calls together, as in:
fun showit( s : string )
  s.encode(3).count.println
for example (where the body desugars as println(count(encode(s,3)))
). An
advantage of the dot notation as syntactic sugar for function calls is that it
is easy to extend the ‘primitive’ methods of any data type: just write a new
function that takes that type as its first argument. In most object-oriented
languages one would need to add that method to the class definition itself
which is not always possible if such a class comes from a library, for example.
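
For instance (a hypothetical helper, not part of the standard library), any function whose first parameter is a string immediately becomes available through dot selection on strings:

```koka
// hypothetical user-defined "method" on string
fun excited( s : string ) : string
  s ++ "!"

fun test-excited() : console ()
  "hello".excited.println   // same as println(excited("hello"))
```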
3.1.3. Type Inference
Koka is also strongly typed. It uses a powerful type inference engine to
infer most types, and types generally do not get in the way. In
particular, you can always leave out the types of any local variables.
This is the case for example for base
and rot
values in the
previous example; hover with the mouse over the example to see the types
that were inferred by Koka. Generally, it is good practice though to
write type annotations for function parameters and the function result
since it both helps with type inference, and it provides useful
documentation with better feedback from the compiler.
For the encode
function it is actually essential to give the type of
the s
parameter: since the map
function is defined for both list
and string
types and the program is ambiguous without an annotation.
Try to load the example in the editor and remove the annotation to see
what error Koka produces.
3.1.4. Anonymous Functions and Trailing Lambdas
Koka also allows for anonymous function expressions using the fn
keyword.
For example, instead of
declaring the encode-char
function, we can also pass it directly to
the map
function as a function expression:
fun encode2( s : string, shift : int )
  s.map( fn(c)
    if c < 'a' || c > 'z' then return c
    val base = (c - 'a').int
    val rot  = (base + shift) % 26
    (rot.char + 'a') )
It is a bit annoying we had to put the final right-parenthesis after the last
brace in the previous example. As a convenience, Koka allows anonymous functions to follow
the function call instead – this is also known as trailing lambdas.
For example, here is how we can print the numbers
1
to 10
:
fun print10()
  for(1,10) fn(i) { println(i) }
which is desugared to for( 1, 10, fn(i){ println(i) } )
. (In fact, since we
pass the i
argument directly to println
, we could have also passed the function itself
directly, and write for(1,10,println)
.)
Anonymous functions without any arguments can be shortened further by leaving
out the fn
keyword as well and just use braces directly. Here is an example using
the repeat
function:
fun printhi10()
  repeat(10) { println("hi") }
where the body desugars to repeat( 10, fn(){ println("hi") } ). This is
especially convenient for the while loop, since this is not a built-in
control flow construct but just a regular function:
fun print11()
  var i := 10
  while { i >= 0 } {
    println(i)
    i := i - 1
  }
Note how the first argument to while
is in braces instead of the usual
parenthesis. In Koka, an expression between parenthesis is always evaluated
before a function call, whereas an expression between braces (ah,
suspenders!) is suspended and may be never evaluated or more than once
(as in our example).
Of course, the previous examples can also use indentation and elide the braces (see Section 4.3), and a more typical way of writing these is:
fun printhi10()
  repeat(10)
    println("hi")

fun print11()
  var i := 10
  while { i >= 0 }
    println(i)
    i := i - 1
3.1.5. With Statements
To the best of our knowledge, Koka was the first language to have
generalized trailing lambdas. It was also one of the first languages
to have dot notation (This was independently developed but it turns out
the D language has a similar feature (called UFCS) which predates dot-notation). Another novel
syntactical feature is the with
statement.
With the ease of passing a function block as a parameter, these
often become nested. For example:
fun twice(f)
  f()
  f()

fun test-twice()
  twice
    twice
      println("hi")
where "hi"
is printed four times (note: this desugars
to twice( fn(){ twice( fn(){ println("hi") }) })
).
Using the with
statement
we can write this more concisely as:
pub fun test-with1()
  with twice
  with twice
  println("hi")
The with
statement essentially puts all statements that follow it into
an anonymous function block and passes that as the last parameter. In general:
translation:
  with f(e1,...,eN)
  <body>
⇝
  f(e1,...,eN, fn(){ <body> })
Moreover, a with
statement can also bind a variable parameter as:
translation:
  with x <- f(e1,...,eN)
  <body>
⇝
  f(e1,...,eN, fn(x){ <body> })
Here is an example using foreach
to span over the rest of the function body:
pub fun test-with2() {
  with x <- list(1,10).foreach
  println(x)
}
which desugars to list(1,10).foreach( fn(x){ println(x) } )
.
This is a bit reminiscent of Haskell do
notation.
Using the with
statement this way may look a bit strange at first
but is very convenient in practice – it helps thinking of with
as
a closure over the rest of the lexical scope.
With Finally
As another example, the finally
function takes as its first argument a
function that is run when exiting the scope – either normally,
or through an “exception” (i.e. when an effect operation does not resume).
Again, with
is a natural fit:
fun test-finally()
  with finally{ println("exiting..") }
  println("entering..")
  throw("oops") + 42
which desugars to finally(fn(){ println... }, fn(){ println("entering"); throw("oops") + 42 })
,
and prints:
entering..
exiting..
uncaught exception: oops
This is another example of the min-gen principle: many languages
have special built-in support for this kind of pattern, like a defer
statement, but in Koka
it is all just function applications with minimal syntactic sugar.
Read more about initially and finally handlers
With Handlers
The with
statement is especially useful in combination with
effect handlers. An effect describes an abstract set of operations
whose concrete implementation can be supplied by a handler.
Here is an example of an effect handler for emitting messages:
// Emitting messages; how to emit is TBD. Just one abstract operation: emit.
effect fun emit(msg : string) : ()

// Emits a standard greeting.
fun hello()
  emit("hello world!")

// Emits a standard greeting to the console.
pub fun hello-console1()
  with handler
    fun emit(msg) println(msg)
  hello()
In this example, the with
expression desugars to (handler{ fun emit(msg){ println(msg) } })( fn(){ hello() } )
.
Generally, a handler{ <ops> }
expression takes
as its last argument a function block so it can be used directly with with
.
Moreover, as a convenience, we can leave out the handler
keyword
for effects that define just one operation (like emit
):
translation
with val op = <expr>
with fun op(x){ <body> }
with ctl op(x){ <body> }
$\mathpre{\rightsquigarrow}$
with handler{ val op = <expr> }
with handler{ fun op(x){ <body> } }
with handler{ ctl op(x){ <body> } }
Using this convenience, we can write the previous example in a more concise and natural way as:
pub fun hello-console2()
  with fun emit(msg) println(msg)
  hello()
Intuitively, we can view the handler with fun emit
as a dynamic binding of the function emit
over the rest of the scope.
Read more about effect handlers
Read more about val
operations
3.1.6. Optional and Named Parameters
Being a function-oriented language, Koka has powerful support for function
calls where it supports both optional and named parameters. For example, the
function replace-all
takes a string, a pattern (named pattern
), and
a replacement string (named repl
):
fun world()
  replace-all("hi there", "there", "world")   // returns "hi world"
Using named parameters, we can also write the function call as:
fun world2()
  "hi there".replace-all( repl="world", pattern="there" )
Optional parameters let you specify default values for parameters that do not
need to be provided at a call-site. As an example, let's define a function
sublist
that takes a list, a start
position, and the length len
of the desired
sublist. We can make the len
parameter optional, defaulting to the length of the input list so that
by default all elements following the start
position are returned:
fun sublist( xs : list<a>, start : int, len : int = xs.length ) : list<a>
  if start <= 0 return xs.take(len)
  match xs
    Nil        -> Nil
    Cons(_,xx) -> xx.sublist(start - 1, len)
Hover over the sublist
identifier to see its full type, where the len
parameter has an optional int
type, signified by the question mark: :?int.
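For readers without the interactive hover, the displayed signature would look roughly like this (a sketch of the inferred type, not actual tool output):

```koka
// sketch of the inferred type of `sublist`;
// the optional parameter shows up as `?int`
sublist : (xs : list<a>, start : int, len : ?int) -> list<a>
```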
3.1.7. A larger example: cracking Caesar encoding
// The letter frequency table for English
val english = [8.2,1.5,2.8,4.3,12.7,2.2,2.0,6.1,7.0,0.2,0.8,4.0,2.4,
               6.7,7.5,1.9,0.1,6.0,6.3,9.1,2.8,1.0,2.4,0.2,2.0,0.1]

// Small helper functions
fun percent( n : int, m : int )
  100.0 * (n.float64 / m.float64)

fun rotate( xs, n )
  xs.drop(n) ++ xs.take(n)

// Calculate a frequency table for a string
fun freqs( s : string ) : list<float64>
  val lowers = list('a','z')
  val occurs = lowers.map( fn(c) s.count(c.string) )
  val total  = occurs.sum
  occurs.map( fn(i) percent(i,total) )

// Calculate how well two frequency tables match according
// to the _chi-square_ statistic.
fun chisqr( xs : list<float64>, ys : list<float64> ) : float64
  zipwith(xs,ys, fn(x,y) ((x - y)^2.0)/y ).foldr(0.0,(+))

// Crack a Caesar encoded string
fun uncaesar( s : string ) : string
  val table  = freqs(s)              // build a frequency table for `s`
  val chitab = list(0,25).map fn(n)  // build a list of chisqr numbers for each shift between 0 and 25
                 chisqr( table.rotate(n), english )
  val min    = chitab.minimum()      // find the minimal element
  val shift  = chitab.index-of( fn(f) f == min ).negate  // and use its position as our shift
  s.encode( shift )

fun test-uncaesar()
  println( uncaesar( "nrnd lv d ixq odqjxdjh" ) )
The val
keyword declares a static value. In the example, the value english
is a list of floating point numbers (of type float64
) denoting the average
frequency for each letter. The function freqs
builds a frequency table for a
specific string, while the function chisqr
calculates how well two frequency
tables match. In the function uncaesar
these functions are used to find a
shift
value that results in a string whose frequency table matches the
english
one the closest – and we use that to decode the string.
You can try out this example directly in the interactive environment:
> :l samples/basic/caesar.kk
3.2. Effect types
A novel part about Koka is that it automatically infers all the side effects
that occur in a function. The absence of any effect is denoted as total
(or
<>
) and corresponds to pure mathematical functions. If a function can raise
an exception the effect is exn
, and if a function may not terminate the
effect is div
(for divergence). The combination of exn
and div
is
pure
and corresponds directly to Haskell's notion of purity.
Non-deterministic functions get the ndet
effect. The ‘worst’ effect is io
and means that a program can raise exceptions, not terminate, be
non-deterministic, read and write to the heap, and do any input/output operations.
Here are some examples of effectful functions:
fun square1( x : int ) : total int   { x*x }
fun square2( x : int ) : console int { println( "a not so secret side-effect" ); x*x }
fun square3( x : int ) : div int     { x * square3( x ) }
fun square4( x : int ) : exn int     { throw( "oops" ); x*x }
When the effect is total
we usually leave it out in the type annotation.
For example, when we write:
fun square5( x : int ) : int
  x*x
the assumed effect is total
. Sometimes, we write an effectful
function, but are not interested in explicitly writing down its effect type.
In that case, we can use a wildcard type which stands for some inferred
type. A wildcard type is denoted by writing an identifier prefixed with an
underscore, or even just an underscore by itself:
fun square6( x : int ) : _e int
  println("I did not want to write down the \"console\" effect")
  x*x
Hover over square6
to see the inferred effect for _e
.
3.2.1. Semantics of effects
The inferred effects are not just considered as some extra type information on
functions. On the contrary, through the inference of effects, Koka has a very
strong connection to its denotational semantics. In particular, the full type
of a Koka function corresponds directly to the type signature of the
mathematical function that describes its denotational semantics. For example,
using 〚t
〛 to translate a type t
into its corresponding
mathematical type signature, we have:
〚int -> total int 〛 | = | $\mathpre{\mathbb{Z}~\rightarrow \mathbb{Z}}$ |
〚int -> exn int 〛 | = | $\mathpre{\mathbb{Z}~\rightarrow (\mathbb{Z}~+~1)}$ |
〚int -> pure int 〛 | = | $\mathpre{\mathbb{Z}~\rightarrow (\mathbb{Z}~+~1)_\bot}$ |
〚int -> <st<h>,pure> int 〛 | = | $\mathpre{(\mathbb{Z}~\times \mathbb{H})~\rightarrow ((\mathbb{Z}~+~1)~\times \mathbb{H})_\bot}$ |
In the above translation, we use $\mathpre{\tau~+~1}$ as a sum
where we have either a type $\mathpre{\tau}$ or a unit $\mathpre{1}$ (i.e. an exception), and we use
$\mathpre{\tau~\times \mathbb{H}}$ for a product consisting of a pair of a
value of type $\mathpre{\tau}$ and a heap $\mathpre{\mathbb{H}}$. From the above correspondence, we can immediately see that
a total
function is truly total in the mathematical sense, while a stateful
function (st<h>
) that can raise exceptions or not terminate (pure
)
takes an implicit heap parameter, and either does not terminate ($\mathpre{\bot}$) or
returns an updated heap together with either a value or an exception ($\mathpre{1}$).
We believe that this semantic correspondence is the true power of full effect types and it enables effective equational reasoning about the code by a programmer. For almost all other existing programming languages, even the most basic semantics immediately include complex effects like heap manipulation and divergence. In contrast, Koka allows a layered semantics where we can easily separate out nicely behaved parts, which is essential for many domains, like safe LINQ queries, parallel tasks, tier-splitting, sand-boxed mobile code, etc.
3.2.2. Combining effects
Often, a function contains multiple effects, for example:
fun combine-effects()
  val i = srandom-int()   // non-deterministic
  throw("oops")           // exception raising
  combine-effects()       // and non-terminating
The effects assigned to combine-effects
are ndet, div, and exn. We
can write such a combination as a row of effects: <div,exn,ndet>
. When
you hover over the combine-effects
identifiers, you will see that the type
inferred is really <pure,ndet>
where pure
is a type alias defined as:
alias pure = <div,exn>
3.2.3. Polymorphic effects
Many functions are polymorphic in their effect. For example, the
map
function
applies a function f
to each element of a (finite) list. As such, the effect
depends on the effect of f
, and the type of map
becomes:
map : (xs : list<a>, f : (a) -> e b) -> e list<b>
We use single letters (possibly followed by digits) for polymorphic types.
Here, the map
function takes a list with elements of some type a
, and a
function f
that takes an element of type a
and returns a new element of
type b
. The final result is a list with elements of type b
. Moreover,
the effect of the applied function e
is also the effect of the map
function itself; indeed, this function has no other effect by itself since it
does not diverge nor raise exceptions.
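To see the polymorphism concretely, here is a small sketch (the function name test-map-effects is illustrative) where the effect e of map is instantiated differently at each call site depending on the passed function:

```koka
fun test-map-effects()
  // the passed function is total, so this use of map is total as well
  val xs = [1,2,3].map( fn(x) x + 1 )
  // the passed function uses println, so this use of map gets the console effect
  val ys = [1,2,3].map( fn(x) { println(x); x } )
  ()
```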
We can use the notation <l|e>
to extend an effect e
with another effect
l
. This is used for example in the while
function which has type:
while : ( pred : () -> <div|e> bool, action : () -> <div|e> () ) -> <div|e> ()
.
The while
function takes a
predicate function and an action to perform, both with effect <div|e>
.
Indeed, since while may diverge depending on the predicate, its effect must
include divergence.
The reader may be worried that the type of while
forces the predicate and
action to have exactly the same effect <div|e>
, which even includes
divergence. However, when effects are inferred at the call-site, both the
effects of predicate and action are extended automatically until they match.
This ensures we take the union of the effects in the predicate and action.
Take for example the following loop:
fun looptest()
  while { is-odd(srandom-int()) }
    throw("odd")
Koka infers that the predicate is-odd(srandom-int())
has
effect <ndet|e1>
while the action has effect <exn|e2>
for some e1
and e2
.
When applying while
, those
effects are unified to the type <exn,ndet,div|e3>
for some e3
.
3.2.4. Local Mutable Variables
The Fibonacci numbers are a sequence where each subsequent Fibonacci number is
the sum of the previous two, where fib(0) == 0
and fib(1) == 1
. We can
easily calculate Fibonacci numbers using a recursive function:
fun fib(n : int) : div int
  if n <= 0 then 0
  elif n == 1 then 1
  else fib(n - 1) + fib(n - 2)
Note that the type inference engine is currently not powerful enough to
prove that this recursive function always terminates, which leads to
inclusion of the divergence effect div
in the result type.
Here is another version of the Fibonacci function but this time
implemented using local mutable variables.
We use the repeat
function to iterate n
times:
fun fib2(n)
  var x := 0
  var y := 1
  repeat(n)
    val y0 = y
    y := x+y
    x := y0
  x
In contrast to a val
declaration that binds an immutable value (as in val y0 = y
),
a var
declaration declares a mutable variable, where the (:=)
operator
can assign a new value to the variable. Internally, the var
declarations use
a state effect handler which ensures
that the state has the proper semantics even if resuming multiple times.
However, that also means that mutable local variables are not quite first-class: we cannot pass them as parameters to other functions, for example (as they are always dereferenced). The lifetime of a mutable local variable cannot exceed its lexical scope. For example, you get a type error if a local variable escapes through a function expression:
fun wrong() : (() -> console ())
  var x := 1
  (fn(){ x := x + 1; println(x) })
This restriction allows for a clean semantics but also for (future) optimizations that are not possible for general mutable reference cells.
Read more about state and multiple resumptions
3.2.5. Reference Cells and Isolated state
Koka also has first-class heap allocated mutable reference cells.
A reference to an
integer is allocated using val r = ref(0)
(since the reference itself is
actually a value!), and can be dereferenced using the bang operator, as !r
.
We can write the Fibonacci function using reference cells as:
fun fib3(n)
  val x = ref(0)
  val y = ref(1)
  repeat(n)
    val y0 = !y
    y := !x + !y
    x := y0
  !x
As we can see, using var
declarations is generally preferred as these
behave better under multiple resumptions, and are also syntactically more
concise as they do not need a dereferencing operator. (Nevertheless, we
still need reference cells as those are first-class while var
variables
cannot be passed to other functions.)
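To illustrate that difference, here is a small sketch of passing a reference cell to another function – something a var variable cannot do (the names inc and test-inc are illustrative, and the effect annotation is our assumption of what would be inferred):

```koka
// increment a heap-allocated reference cell;
// this uses the read<h> and write<h> effects
fun inc( r : ref<h,int> ) : <read<h>,write<h>> ()
  r := !r + 1

fun test-inc() : int   // the st<h> effect is discharged again
  val r = ref(0)
  inc(r)
  !r
```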
When we look at the types inferred for the references, we see that x
and y
have type ref<h,int>
which stands for a reference to a mutable value of
type int
in some heap h
. The effects on heaps are allocation as
heap<h>
, reading from a heap as read<h>
and writing to a heap as
write<h>
. The combination of these effects is called stateful and denoted
with the alias st<h>
.
Clearly, the effect of the body of fib3
is st<h>
; but when we hover over
fib3
, we see the type inferred is actually the total
effect: (n:int) -> int
.
Indeed, even though fib3
is stateful inside, its side-effects can
never be observed. It turns out that we can safely discard the st<h>
effect whenever the heap type h
cannot be referenced outside this function,
i.e. it is not part of an argument or return type. More formally, the Koka
compiler proves this by showing that a function is fully polymorphic in the
heap type h
and applies the run
function (corresponding to runST
in
Haskell) to discard the st<h>
effect.
The Garsia-Wachs algorithm is a nice example where side-effects are used
internally across function definitions and data structures, but where the
final algorithm itself behaves like a pure function, see the
samples/basic/garsia-wachs.kk
.
3.3. Data Types
3.3.1. Structs
An important aspect of a function-oriented language is to be able to define rich data types over which the functions work. A common data type is that of a struct or record. Here is an example of a struct that contains information about a person:
struct person
  age : int
  name : string
  realname : string = name

val brian = Person( 29, "Brian" )
Every struct
(and other data types) comes with a constructor function to
create instances, as in Person(29,"Brian")
. Moreover, these
constructors can use named arguments so we can also call the constructor
as Person( name = "Brian", age = 19, realname = "Brian H. Griffin" )
which is quite close to regular record syntax but without any special rules;
it is just functions all the way down!
Also, Koka automatically generates accessor functions for each field in a
struct (or other data type), and we can access the age
of a person
as
brian.age
(which is of course just syntactic sugar for age(brian)
).
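As a small sketch of the generated accessors in action (the function name show-person is just illustrative):

```koka
fun show-person( p : person ) : string
  // uses the generated `name` and `age` accessor functions
  p.name ++ " is " ++ p.age.show
```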
3.3.2. Copying
By default, all structs (and other data types) are immutable. Instead of
directly mutating a field in a struct, we usually return a new struct where
the fields are updated. For example, here is a birthday
function that
increments the age
field:
fun birthday( p : person ) : person
  p( age = p.age + 1 )
Here, birthday
returns a fresh person
which is equal to p
but with the
age
incremented. The syntax p(...)
is syntactic sugar for calling the copy constructor of
a person
. This constructor is also automatically generated for each data
type, and is internally generated as:
fun copy( p, age = p.age, name = p.name, realname = p.realname )
  Person(age, name, realname)
When arguments follow a data value, as in p( age = p.age + 1 )
, the expression is expanded into a call to this
copy function, as in p.copy( age = p.age+1 )
. In adherence with the min-gen principle,
there are no special rules for record updates; these are just plain function calls with optional
and named parameters.
3.3.3. Alternatives (or Unions)
Koka also supports algebraic data types where there are multiple alternatives. For example, here is an enumeration:
type color
  Red
  Green
  Blue
Special cases of these enumerated types are the void
type which has no
alternatives (and therefore there exists no value with this type), the unit
type ()
which has just one constructor, also written as ()
(and
therefore, there exists only one value with the type ()
, namely ()
), and
finally the boolean type bool
with two constructors True
and False
.
type void

type ()
  ()

type bool
  False
  True
Constructors can have parameters. For example, here is how to create a
number
type which is either an integer or the infinity value:
type number
  Infinity
  Integer( i : int )
We can create such number by writing Integer(1)
or Infinity
. Moreover,
data types can be polymorphic and recursive. Here is the definition of the
list
type which is either empty (Nil
) or is a head element followed by a
tail list (Cons
):
type list<a>
  Nil
  Cons{ head : a; tail : list<a> }
Koka automatically generates accessor functions for each named parameter. For
lists for example, we can access the head of a list as Cons(1,Nil).head
.
We can now also see that struct
types are just syntactic sugar for a regular
type
with a single constructor of the same name as the type:
translation
struct tp { <fields> }
$\mathpre{\rightsquigarrow}$
type tp { Tp { <fields> } }
For example,
our earlier person
struct, defined as
struct person{ age : int; name : string; realname : string = name }
desugars to:
type person
  Person{ age : int; name : string; realname : string = name }
or with brace elision as:
type person
  Person
    age : int
    name : string
    realname : string = name
3.3.4. Matching
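As a minimal sketch of matching (based on the match expression already used in the earlier sublist example), a match expression branches on the constructors of a data type (the function name sum-list is illustrative):

```koka
// sum a list by matching on its constructors
fun sum-list( xs : list<int> ) : int
  match xs
    Nil        -> 0
    Cons(x,xx) -> x + sum-list(xx)
```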
3.3.5. Extensible Data Types
3.3.6. Inductive, Co-inductive, and Recursive Types
For the purposes of equational reasoning and termination checking, a type
declaration is limited to finite inductive types. There are two more
declarations, namely co type
and rec type
that allow for co-inductive types,
and arbitrary recursive types respectively.
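For example, a datatype whose constructor stores a function over the datatype itself is not inductive and would need a rec type declaration; a sketch (this exp type is illustrative, not from the standard library):

```koka
// the untyped lambda calculus: `Lam` stores a function over `exp` itself,
// so `exp` is not a finite inductive type and must be declared `rec type`
rec type exp
  Lam( f : exp -> exp )
```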
3.3.7. Value Types
Value types are (non-recursive) data types that are not heap allocated but passed on the stack as a value. Since data types are immutable, semantically these types are equivalent but value types can be more efficient as they avoid heap allocation and reference counting (or more expensive as they need copying instead of sharing a reference).
By default, any non-recursive inductive data type of a size up to 3 machine words (= 24 bytes on a 64-bit platform) is treated as a value type. For example, tuples and 3-tuples are passed and returned by value. Usually, that means that such tuples are for example returned in registers when compiling with optimization.
We can also force a type to be compiled as a value type by using the value
keyword
in front of a type
or struct
declaration:
value struct argb{ alpha: int; color-red: int; color-green: int; color-blue: int }
advanced
Boxing
To support generic polymorphism, sometimes value types are boxed. For example, a list
is polymorphic in its elements. That means that if we construct a list of tuples, like
[(1,True)]
, the element (1,True)
will be boxed and heap allocated – essentially
the compiler transforms this expression into [Box((1,True))]
internally.
Note that for regular data types and int
's boxing is free (as in isomorphic). Moreover, value types
up to 63 bits (on a 64-bit platform) are boxed in-place and do not require heap allocation
(like int32
). The float64
type is also specialized; by default the Koka compiler
only heap allocates float64
s when their absolute value is
outside the range 2^-511 up to 2^512 (excluding infinity and NaN).
For performance sensitive code we may specialize certain polymorphic data types to reduce allocations due to boxing. For example:
type mylist
  MyCons{ head1: int; head2: bool; mytail: mylist }
  MyNil
Our previous example becomes MyCons(1,True,MyNil)
now and is more efficient as it only needs
one allocation for the MyCons
without an indirection to a tuple.
In the future we hope to extend Koka to perform specialization automatically or by
using special directives.
3.4. Effect Handlers
Effect handlers [9, 10, 17] are a novel way to define control-flow abstractions and dynamic binding as user defined handlers – no need anymore to add special compiler extensions for exceptions, iterators, async-await, probabilistic programming, etc. Moreover, these handlers can be composed freely so the interaction between, say, async-await and exceptions are well-defined.
3.4.1. Handling
Let's start with defining an exception effect of our own. The effect
declaration defines a new type together with operations, for now
we use the most general control (ctl
) operation:
effect raise
  ctl raise( msg : string ) : a
This defines an effect type raise
together with an operation
raise
of type (msg : string) -> raise a
. With the effect signature
declared, we can already use the operations:
fun safe-divide( x : int, y : int ) : raise int
  if y==0 then raise("div-by-zero") else x / y
where we see that the safe-divide
function gets the raise
effect
(since we use the raise
operation in the body). Such an effect
type means that we can only evaluate the function in a context
where raise
is handled (in other words, where it is “dynamically bound”, or
where we “have the raise
capability”).
We can handle the effect by giving a concrete definition for the raise
operation.
For example, we may always return a default value:
fun raise-const() : int
  with handler
    ctl raise(msg) 42
  8 + safe-divide(1,0)
The call raise-const()
evaluates to 42
(not 50
).
When a raise
is called (in safe-divide
), it will yield to its innermost handler, unwind
the stack, and only then evaluate the operation definition – in this case just directly
returning 42
from the point where the handler is defined.
Now we can see why it is called a control
operation as raise
changes the regular linear control-flow and yields right
back to its innermost handler from the original call site.
Also note that raise-const
is total
again and the handler discharged the
raise
effect.
The handler{ <ops> }
expression is a function that itself expects a function
argument over which the handler is scoped, as in (handler{ <ops> })(action)
.
This works well in combination with the with
statement of course.
As a syntactic convenience, for single operations we can leave out the handler
keyword
which is translated as:
translation
with ctl op(<args>){ <body> }
$\mathpre{\rightsquigarrow}$
with handler ctl op(<args>){ <body> }
With this translation, we can write the previous example more concisely as:
fun raise-const1() : int
  with ctl raise(msg) 42
  8 + safe-divide(1,0)
which eventually expands to (handler{ ctl raise(msg){ 42 } })(fn(){ 8 + safe-divide(1,0) })
.
We have a similar syntactic convenience for effects with one operation where the name of the effect and operation are the same. We can define such an effect by just declaring its operation which implicitly declares an effect type of the same name:
translation
effect ctl op(<parameters>) : <result-type>
$\mathpre{\rightsquigarrow}$
effect op { ctl op(<parameters>) : <result-type> }
That means we can declare our raise
effect signature also more concisely as:
effect ctl raise( msg : string ) : a
Read more about the with
statement
3.4.2. Resuming
The power of effect handlers is not just that we can yield to the innermost handler, but that we can also resume back to the call site with a result.
Let's define an ask<a>
effect that allows us to get a contextual value of type a
:
effect ask<a>          // or: effect<a> ctl ask() : a
  ctl ask() : a

fun add-twice() : ask<int> int
  ask() + ask()
The add-twice
function can ask for numbers but it is unaware of how these
are provided – the effect signature just specifies a contextual API.
We can handle it by always resuming with a constant for example:
fun ask-const() : int
  with ctl ask() resume(21)
  add-twice()
where ask-const()
evaluates to 42
. Or by returning random values, like:
fun ask-random() : random int
  with ctl ask() resume(random-int())
  add-twice()
where ask-random()
now handles the ask<int>
effect, but itself now has
random
effect (see std/num/random
).
The resume
function is implicitly bound by a ctl
operation and resumes
back to the call-site with the given result.
As we saw in the exception example, we do
not need to call resume
and can also directly return into our handler scope. For example, we
may only want to handle a ask
once, but after that give up:
fun ask-once() : int
  var count := 0
  with ctl ask()
    count := count + 1
    if count <= 1 then resume(42) else 0
  add-twice()
Here ask-once()
evaluates to 0
since the second call to ask
does not resume (and instead returns 0
directly in the ask-once
context). This pattern can for example
be used to implement the concept of fuel in a setting where a computation is
only allowed to take a limited amount of steps.
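A sketch of such a fuel handler for the ask effect above, following the same shape as ask-once (the names ask-fuel and budget are illustrative):

```koka
fun ask-fuel( fuel : int ) : int
  var budget := fuel
  with ctl ask()
    if budget > 0 then
      budget := budget - 1
      resume(42)      // still have fuel: resume as usual
    else
      0               // out of fuel: give up and return directly
  add-twice()
```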
Read more about var
mutable variables
3.4.3. Tail-Resumptive Operations
A ctl
operation is one of the most general ways to define operations since
we get a first-class resume
function. However, almost all operations in practice turn out
to be tail-resumptive: that is, they resume exactly once with their final result
value. To make this more convenient, we can declare fun
operations that do this
by construction, i.e.
translation
with fun op(<args>){ <body> }
$\mathpre{\rightsquigarrow}$
with ctl op(<args>){ val f = fn(){ <body> }; resume( f() ) }
(The translation is defined via an intermediate function f
so return
works as expected).
With this syntactic sugar, we can write our earlier ask-const
example
using a fun
operation instead:
fun ask-const2() : int
  with fun ask() 21
  add-twice()
This also conveys better that even though ask
is dynamically bound, it behaves
just like a regular function without changing the control-flow.
Moreover, operations declared as fun
are much more efficient than general
ctl
operations. The Koka compiler uses (generalized) evidence passing [19–21]
to pass down handler information to each call-site. At the call to ask
in add-twice
,
it selects the handler from the evidence vector and when the operation is
a tail-resumptive fun
, it calls it directly as a regular function (except with an adjusted evidence
vector for its context). Unlike a general ctl
operation, there is no need to yield upward
to the handler, capture the stack, and eventually resume again.
This gives fun
(and val
) operations a performance cost very similar to virtual method calls
which can be quite efficient.
For even a bit more performance, you can also declare upfront that any operation definition must be tail-resumptive, as:
effect ask<a>
  fun ask() : a
This restricts all handler definitions for the ask
effect to use fun
definitions
for the ask
operation. However, it increases the ability to reason about the code,
and the compiler can optimize such calls a bit more as it no longer needs to check at
run-time if the handler happens to define the operation as tail-resumptive.
advanced
For even better performance, one can mark the effect as linear (Section 3.4.12).
Such effects are statically guaranteed to never use a general control operation and
never need to capture a resumption. During compilation, this removes the need for the monadic transformation
and improves performance of any effect polymorphic function that uses such effects as well
(like map
or foldr
). Examples of linear effects are state (st
) and builtin effects
(like io
or console
).
Value Operations
A common subset of operations always tail-resume with a single value; these are
essentially dynamically bound variables (but statically typed!). Such operations
can be declared as a val
with the following translation:
translation
with val v = <expr>
$\mathpre{\rightsquigarrow}$
val x = <expr>
with fun v(){ x }
$\mathpre{\rightsquigarrow}$
val x = <expr>
with ctl v(){ resume(x) }
For an example of the use of value operations, consider a pretty printer that produces pretty strings from documents:
fun pretty( d : doc ) : string
Unfortunately, it has a hard-coded maximum display width of 40
deep
down in the code of pretty
:
fun pretty-internal( line : string ) : string
  line.truncate(40)
To abstract over the width we have a couple of choices: we could make the width a regular parameter but now we need to explicitly add the parameter to all functions in the library and manually thread them around. Another option is a global mutable variable but that leaks side-effects and is non-modular.
Or, we can define it as a value operation instead:
effect val width : int
This also allows us to refer to the width
operation as if it was a
regular value (even though internally it invokes the operation).
So, the check for the width in the pretty printer can be written as:
fun pretty-internal( line : string ) : width string
  line.truncate(width)
When using the pretty printer we can bind the width
as a
regular effect handler:
fun pretty-thin(d : doc) : string
  with val width = 40
  pretty(d)
Note that we did not need to change the structure of the
original library functions. However the types of the functions
still change to include the width
effect as these now
require the width
value to be handled at some point.
For example, the type of pretty
becomes:
fun pretty( d : doc ) : width string
as it requires the width
effect to be handled (aka,
the “dynamic binding for width : int
to be defined”,
aka, the “width
capability”).
3.4.4. Abstracting Handlers
As another example, a writer effect is quite common where
values are collected by a handler. For example, we can
define an emit
effect to emit messages:
effect fun emit( msg : string ) : ()
fun ehello() : emit ()
  emit("hello")
  emit("world")
We can define for example a handler that prints the emitted messages directly to the console:
fun ehello-console() : console ()
  with fun emit(msg) println(msg)
  ehello()
Here the handler is defined directly, but we can also abstract the handler for emitting to the console into a separate function:
fun emit-console( action )
  with fun emit(msg) println(msg)
  action()
where emit-console
has the inferred type (action : () -> <emit,console|e> a) -> <console|e> a
(hover over the source to see the inferred types), where
the action can use the effects emit
, console
, and any other effects e
,
and where the final effect is just <console|e>
as the emit
effect
is discharged by the handler.
Note, we could have written the above too as:
val emit-console2 = handler
  fun emit(msg) println(msg)
since a handler{ ... }
expression is a function itself (and thus a value).
Generally we prefer the earlier definition though as it allows further parameters
like an initial state.
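For example, an abstracted handler with an extra parameter could add a prefix to every message; a sketch (emit-prefixed and ehello-prefixed are illustrative names, not library functions):

```koka
fun emit-prefixed( prefix : string, action : () -> <emit,console|e> a ) : <console|e> a
  with fun emit(msg) println(prefix ++ msg)
  action()

fun ehello-prefixed() : console ()
  with emit-prefixed("log: ")   // `with` passes the rest of the body as `action`
  ehello()
```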
Since with
works generally, we can use the abstracted handlers just like
regular handlers, and our earlier example can be written as:
fun ehello-console2() : console () with emit-console ehello()
(which expands to emit-console( fn(){ ehello() } )
).
Another useful handler may collect all emitted messages as a list of lines:
fun emit-collect( action : () -> <emit|e> () ) : e string
  var lines := []
  with handler
    return(x)     lines.reverse.join("\n")
    fun emit(msg) lines := Cons(msg,lines)
  action()

fun ehello-commit() : string
  with emit-collect
  ehello()
This is a total handler and only discharges the emit
effect.
Read more about the with
statement
Read more about var
mutable variables
As another example, consider a generic catch
handler that
applies a handling function when raise
is called on our
exception example:
fun catch( hnd : (string) -> e a, action : () -> <raise|e> a ) : e a
  with ctl raise(msg) hnd(msg)
  action()
We can use it now conveniently with a with
statement to handle
exceptional situations:
fun catch-example()
  with catch( fn(msg){ println("error: " ++ msg); 42 } )
  safe-divide(1,0)
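Since raise never resumes, its handler behaves much like an ordinary exception handler. A minimal Python simulation of this catch combinator (all names hypothetical; Python exceptions stand in for the raise operation):

```python
class Raise(Exception):
    """Stands in for Koka's raise effect."""
    pass

def raise_(msg):
    raise Raise(msg)

def safe_divide(x, y):
    # analogue of safe-divide: raise on division by zero
    if y == 0:
        raise_("divide by zero")
    return x // y

def catch(hnd, action):
    # run the action; if raise_ was called, apply the handling function
    try:
        return action()
    except Raise as e:
        return hnd(str(e))
```

For example, `catch(lambda msg: 42, lambda: safe_divide(1, 0))` evaluates to 42, while a non-raising action passes its result through unchanged.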
Advanced. The catch
handler has an interesting type where the action can
have a raise
effect (() -> <raise|e> a
) and maybe further effects e
,
while the handling function hnd
only has effect e
. Now consider
supplying a handling function that itself calls raise
: in that case, the
type of catch
would be instantiated to: (hnd: (string) -> <raise> a, action : () -> <raise, raise> a ) : <raise> a
.
This is correct: the (outer) raise
effect of action
is handled and discharged, but since
the handling function hnd
can still cause raise
to be called, the final effect still contains raise
.
Here we see that Koka allows duplicate effect labels [8] where action
has
an instantiated <raise,raise>
effect type.
These kind of types occur naturally in the presence of polymorphic effects, and there is a natural correspondence
to the structure of the evidence vectors at runtime (with entries for each nested effect handler).
Intuitively, the action
effect expresses that
its outer (left-most) raise
is handled, but that there may be other exceptions that are not handled – in this
case from the handling function hnd
, but they can also be masked exceptions (as described in Section 3.4.7).
3.4.5. Return Operations
In the previous emit-collect
example we saw the use of
a return
operation. Such an operation changes the final
result of a handler's action.
For example, consider our earlier user-defined exception effect raise
.
We can define a general handler that transforms any exceptional
action into one that returns a maybe
type:
fun raise-maybe( action : () -> <raise|e> a ) : e maybe<a>
  with handler
    return(x)      Just(x)  // normal return: wrap in Just
    ctl raise(msg) Nothing  // exception: return Nothing directly
  action()

fun div42()
  (raise-maybe{ safe-divide(1,0) }).default(42)
(where the body of div42
desugars to default( raise-maybe(fn(){ safe-divide(1,0) }), 42 )
).
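The combination of a return clause and a non-resuming operation maps naturally onto try/except in Python. A hedged sketch of raise-maybe and div42 (names hypothetical; tuples stand in for the maybe type):

```python
class Raise(Exception):
    """Stands in for Koka's raise effect."""
    pass

def raise_(msg):
    raise Raise(msg)

def safe_divide(x, y):
    if y == 0:
        raise_("divide by zero")
    return x // y

def raise_maybe(action):
    # the try-return mirrors the return clause (wrap in Just);
    # the except mirrors the ctl raise clause (return Nothing)
    try:
        return ("Just", action())
    except Raise:
        return ("Nothing",)

def default(m, d):
    # analogue of maybe's .default(d)
    return m[1] if m[0] == "Just" else d

def div42():
    return default(raise_maybe(lambda: safe_divide(1, 0)), 42)
```

As in the Koka version, `div42()` evaluates to 42 because the division raises and the Nothing branch falls back to the default.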
Read more about function block expressions
Read more about dot expressions
A State Effect
For more examples of the use of return
operations, we look at the state effect.
In its most general form it has just a set
and get
operation:
effect state<a>
  fun get() : a
  fun set( x : a ) : ()

fun sumdown( sum : int = 0 ) : <state<int>,div> int
  val i = get()
  if i <= 0 then sum else
    set( i - 1 )
    sumdown( sum + i )
We can define a generic state handler most easily by using var
declarations:
fun state( init : a, action : () -> <state<a>,div|e> b ) : <div|e> b
  var st := init
  with handler
    fun get()  st
    fun set(i) st := i
  action()
where state(10){ sumdown() }
evaluates to 55
.
Read more about default parameters
Read more about trailing lambdas
Read more about var
mutable variables
Building on the previous state example, suppose we also like to return the final state. A nice way to do this is to use a return operation again to pair the final result with the final state:
fun pstate( init : a, action : () -> <state<a>,div|e> b ) : <div|e> (b,a)
  var st := init
  with handler
    return(x)  (x,st)   // pair with the final state
    fun get()  st
    fun set(i) st := i
  action()
where pstate(10){ sumdown() }
evaluates to (55,0)
.
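To make the pairing return clause concrete, here is a rough Python analogue of pstate and sumdown (names hypothetical; the get/set operations are passed in explicitly, and a one-element list holds the mutable state):

```python
def pstate(init, action):
    # the returned pair mirrors the return clause: (result, final state)
    st = [init]
    result = action(lambda: st[0], lambda x: st.__setitem__(0, x))
    return (result, st[0])

def sumdown(get, set_, acc=0):
    # analogue of sumdown: counts the state down to 0, summing as it goes
    i = get()
    if i <= 0:
        return acc
    set_(i - 1)
    return sumdown(get, set_, acc + i)
```

As in Koka, `pstate(10, sumdown)` evaluates to `(55, 0)`: the sum 10+9+...+1 paired with the final state 0.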
Advanced. It is even possible to have a handler that only
contains a single return
operation: such handler handles no effect
at all but only transforms the final result of a function.
For example, we can define the previous example also with
a separate return
handler as:
fun pstate2( init : a, action : () -> <state<a>,div|e> b ) : <div|e> (b,a)
  var st := init
  with return(x) (x,st)
  with handler
    fun get()  st
    fun set(i) st := i
  action()
Here it is a bit contrived, but it can make certain programs more concise in their definition; see for example [5].
3.4.6. Combining Handlers
Advanced. What makes effect handlers a good control-flow abstraction? There are three fundamental advantages with regard to other approaches:
- Effect handlers can have simple (Hindley-Milner) types. This is unlike shift/reset, for example, which needs type rules with answer types (as the type of shift depends on the context of its matching reset).
- The scope of an effect handler is delimited by the handler definition. This is just like shift/reset but unlike call/cc. Delimiting the scope of a resumption has various good properties, like efficient implementation strategies, but it also allows for modular composition (see also Oleg Kiselyov's “against call/cc”).
- Effect handlers can be composed freely. This is unlike general monads, which need monad transformers to compose in particular ways. Essentially, effect handlers can compose freely because every effect handler can eventually be expressed as an instance of a free monad, and free monads do compose. This also means that some monads cannot be expressed as an effect handler (namely the non-algebraic ones). A particular example of this is the continuation monad (which can express call/cc).
The Koka compiler internally uses monads and shift/reset to compile effect handlers, and it compiles handlers into an internal free monad based on multi-prompt delimited control [4, 21].
By inlining the monadic bind we are able to generate efficient C code that only allocates continuations
in the case one is actually yielding up to a general ctl
operation.
A great property of effect handlers is that they can be freely composed together.
For example, suppose we have a function
that calls raise
if the state is an odd number:
fun no-odds() : <raise,state<int>> int
  val i = get()
  if i.is-odd then raise("no odds") else
    set(i / 2)
    i
then we can compose a pstate
and raise-maybe
handler together
to handle the effects:
fun state-raise(init) : div (maybe<int>,int)
  with pstate(init)
  with raise-maybe
  no-odds()
where both the state<int>
and raise
effects are discharged by the respective handlers.
Note that the type reflects that we always return a pair whose first element is either
Nothing (if raise was called) or a Just with the final result, and whose second element
is the final state. This corresponds to how we usually combine state and exceptions, where the
state (or heap) is set to the state at the point the exception happened.
However, if we combine the handlers in the opposite order, we get a form of transactional state where we either get an exception (and no final state), or we get a pair of the result with the final state:
fun raise-state(init) : div maybe<(int,int)>
  with raise-maybe
  with pstate(init)
  no-odds()
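The effect of handler ordering can be simulated directly in Python with exceptions and a mutable cell. A sketch (all names hypothetical) showing that state outside exceptions keeps the state on a raise, while exceptions outside state discard it:

```python
class Raise(Exception):
    pass

def no_odds(get, set_):
    # analogue of no-odds: raise on odd state, else halve it
    i = get()
    if i % 2 == 1:
        raise Raise("no odds")
    set_(i // 2)
    return i

def state_raise(init):
    # state handler outside, exceptions inside: state survives a raise
    st = [init]
    get, set_ = (lambda: st[0]), (lambda x: st.__setitem__(0, x))
    try:
        r = ("Just", no_odds(get, set_))
    except Raise:
        r = ("Nothing",)
    return (r, st[0])

def raise_state(init):
    # exceptions outside, state inside: a raise discards the final state
    st = [init]
    get, set_ = (lambda: st[0]), (lambda x: st.__setitem__(0, x))
    try:
        return ("Just", (no_odds(get, set_), st[0]))
    except Raise:
        return ("Nothing",)
```

With an even initial state both orderings succeed; with an odd one, `state_raise` still reports the state at the point of the raise, while `raise_state` yields only Nothing, mirroring transactional state.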
3.4.7. Masking Effects
Similar to masking signals in Unix, we can mask effects to not be handled by
their innermost effect handler. The expression mask<eff>(action)
modularly masks
any effect operations in eff
inside the action
. For example,
consider two nested handlers for the emit
operation:
fun mask-emit()
  with fun emit(msg) println("outer:" ++ msg)
  with fun emit(msg) println("inner:" ++ msg)
  emit("hi")
  mask<emit>
    emit("there")
If we call mask-emit()
it prints:
inner: hi
outer: there
The second call to emit
is masked and therefore it skips the innermost
handler and is handled subsequently by the outer handler (i.e. mask only
masks an operation once for its innermost handler).
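One way to picture masking is as an index into the stack of installed handlers: an unmasked operation goes to the innermost handler, and each mask skips one more handler. A toy Python sketch of this view (names hypothetical, not how Koka is implemented):

```python
def mask_emit():
    out = []
    # handler "stack": innermost handler last; mask=n skips the n innermost
    handlers = [lambda m: out.append("outer:" + m),
                lambda m: out.append("inner:" + m)]

    def emit(msg, mask=0):
        handlers[-1 - mask](msg)

    emit("hi")             # handled by the innermost handler
    emit("there", mask=1)  # masked once: skips to the outer handler
    return out
```

Calling `mask_emit()` produces `["inner:hi", "outer:there"]`, matching the Koka output above.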
The type of mask<l>
for some effect label l
is (action: () -> e a) -> <l|e> a
where it injects the effect l
into the final effect result <l|e>
(even though the mask itself never
actually performs any operation in l
– it only masks any operations
of l
in action
).
This type usually leads to duplicate effect labels, for example,
the effect of mask<emit>{ emit("there") }
is <emit,emit>
signifying
that there need to be two handlers for emit
: in this case, one to skip
over, and one to subsequently handle the masked operation.
Effect Abstraction
The previous example is not very useful, but generally we can
use mask
to hide internal effect handling from higher-order functions.
For example, consider the following function that needs to handle
internal exceptions:
fun mask-print( action : () -> e int ) : e int
  with ctl raise(msg) 42
  val x = mask<raise>(action)
  if x.is-odd then raise("wrong") // internal exception
  x
Here the type of mask-print
does not expose at all that we handle the raise
effect internally for specific code and it is fully abstract – even if the action itself would call raise
,
it would neatly skip the internal handler due to the mask<raise>
expression.
If we would leave out the mask
, and call action()
directly, then the inferred
type of action
would be () -> <raise|e> int
instead, showing that the raise
effect would be handled.
Note that this is usually the desired behaviour since in the majority of cases
we want to handle the effects in a particular way when defining handler abstractions.
The cases where mask
is needed are much less common in our experience.
advanced
State as a Combined Effect
Another nice use-case for mask
occurs when modeling state directly using
effect handlers without using mutable local variables [1]. We can do this
using two separate operations peek
and poke
:
effect<a> val peek : a            // get the state
effect<a> ctl poke( x : a ) : ()  // set the state to x
We can now define a generic state handler as:
fun ppstate( init : a, action : () -> <peek<a>,poke<a>|e> b ) : e b
  with val peek = init
  with ctl poke(x)
    mask<peek>
      with val peek = x
      resume(())
  action()
In the handler for poke
we resume under a fresh handler for peek
that
is bound to the new state. This means, though, that there will be an ever-increasing
“stack” of handlers for peek
. To keep the type from growing infinitely, we
need to mask out any potential operation to a previous handler of peek
which
is why the mask
is needed. (Another way of looking at this is to just follow
the typing: action
has a peek
effect, and unifies with the effect of
the poke
operation definition. Since it handles its own peek
effect, it needs
to be injected back in with a mask
.)
(Note: since the handler stack grows indefinitely on every poke
this example
is mostly of theoretical interest. However, we are looking into a stack smashing
technique where we detect at runtime that a mask
can discard a handler frame
from the stack.)
3.4.8. Overriding Handlers
A common use for masking is to override handlers. For example, consider
overriding the behaviour of emit
:
fun emit-quoted1( action : () -> <emit,emit|e> a ) : <emit|e> a
  with fun emit(msg) emit("\"" ++ msg ++ "\"")
  action()
Here, the handler for emit
calls itself emit
to actually emit the newly
quoted string. The effect type inferred for emit-quoted1
is (action : () -> <emit,emit|e> a) -> <emit|e> a
.
This is not the nicest type as it exposes that action
is evaluated under (at least) two
emit
handlers (and someone could use mask
inside action
to use the outer emit
handler).
The override
keyword keeps the type nice and fully overrides the
previous handler which is no longer accessible from action
:
fun emit-quoted2( action : () -> <emit|e> a ) : <emit|e> a
  with override fun emit(msg) emit("\"" ++ msg ++ "\"")
  action()
This of course applies to any handler or value, for example,
to temporarily increase the width
while pretty printing,
we can override the width
as:
fun extra-wide( action )
  with override val width = 2*width
  action()
advanced
Mask Behind
Unfortunately, we cannot modularly define overriding with just mask
; if we
add mask
outside of the emit
handler, the emit
call inside the operation
definition would get masked and skip our intended handler. On the other hand,
if we add mask
just over action
all its emit
calls would be masked for
our intended handler!
For this situation, there is another primitive that only “masks the masks”.
The expression mask behind<eff>
has type (() -> <eff|e> a) -> <eff,eff|e> a
and only masks any masked operations but not the direct ones. The override
keyword is defined in terms of this primitive:
translation
with override handler<eff> { <ops> } <body>
  ⇝
(handler<eff> { <ops> })(mask behind<eff>{ <body> })
This ensures any operation calls in <body>
go to the newly defined
handler while any masked operations are masked one more level and skip
both of the two innermost handlers.
3.4.9. Side-effect Isolation
3.4.10. Resuming more than once
Since resume
is a first-class function (well, almost, see raw control),
it is possible to store it in a list for example to implement a scheduler,
but it is also possible to invoke it more than once. This can be used to
implement backtracking or probabilistic programming models.
A common example of multiple resumptions is the choice
effect:
effect ctl choice() : bool

fun xor() : choice bool
  val p = choice()
  val q = choice()
  if p then !q else q
One possible implementation just uses random numbers:
fun choice-random(action : () -> <choice,random|e> a) : <random|e> a
  with fun choice() random-bool()
  action()
Where choice-random(xor)
returns True
and False
at random.
However, we can also resume multiple times, once with False
and once with True
,
to return all possible outcomes. This also changes the handler type to return a list
of all results of the action, and we need a return clause to wrap the result
of the action in a singleton list:
fun choice-all(action : () -> <choice|e> a) : e list<a>
  with handler
    return(x)    [x]
    ctl choice() resume(False) ++ resume(True)
  action()
where choice-all(xor)
returns [False,True,True,False]
.
Resuming more than once interacts in interesting ways with the
state effect. Consider the following example that uses both
choice
and state
:
fun surprising() : <choice,state<int>> bool
  val p = choice()
  val i = get()
  set(i+1)
  if i>0 && p then xor() else False
We can combine the handlers in two interesting ways:
fun state-choice() : div (list<bool>,int)
  pstate(0)
    choice-all(surprising)

fun choice-state() : div list<(bool,int)>
  choice-all
    pstate(0,surprising)
In state-choice()
the pstate
is the outer handler and becomes like a global
state over all resumption strands in choice-all
, and thus after the first resume
the i>0 && p
condition in surprising
is True
, and we get ([False,False,True,True,False],2)
.
In choice-state()
the pstate
is the inner handler and becomes like transactional state,
where the state becomes local to each resumption strand in choice-all
.
Now i
is always 0
at first and thus we get [(False,1),(False,1)]
.
Advanced. This example also shows how var
state is correctly saved and restored on resumptions
(as part of the stack) and this is essential to the correct composition of effect handlers.
If var
declarations were instead heap allocated or captured by reference, they would no
longer be local to their scope and side effects could “leak” across different resumptions.
3.4.11. Initially and Finally
With arbitrary effect handlers we need to be careful when interacting with external
resources like files. Generally, operations can never resume (like exceptions),
resume exactly once (giving the usual linear control flow), or resume more than once.
To robustly handle these different cases, Koka provides the finally
and initially
functions. Suppose we have the following low-level file operations on file handles:
type fhandle

fun fopen( path : string ) : <exn,filesys> fhandle
fun hreadline( h : fhandle ) : <exn,filesys> string
fun hclose( h : fhandle ) : <exn,filesys> ()
Using these primitives, we can declare a fread
effect to read from a file:
effect fun fread() : string

fun with-file( path : string, action : () -> <fread,exn,filesys|e> a ) : <exn,filesys|e> a
  val h = fopen(path)
  with handler
    return(x)   { hclose(h); x }
    fun fread() hreadline(h)
  action()
However, as it stands it would fail to close the file handle if an exceptional
effect inside action is used (i.e. any operation that never resumes).
The finally
function handles these situations, and
takes as its first argument a function that is always executed when either returning normally, or
when unwinding for a non-resuming operation. So, a more robust way to write
with-file
is:
fun with-file( path : string, action : () -> <fread,exn,filesys|e> a ) : <exn,filesys|e> a
  val h = fopen(path)
  with finally{ hclose(h) }
  with fun fread() hreadline(h)
  action()
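For the exactly-once case, this pattern corresponds closely to Python's try/finally, which likewise runs the cleanup both on normal return and on unwinding. A rough analogue (names hypothetical; Python file handles stand in for fhandle):

```python
def with_file(path, action):
    # 'finally' guarantees the handle is closed on normal return
    # or when an exception unwinds through the action
    h = open(path)
    try:
        return action(lambda: h.readline())
    finally:
        h.close()
```

For example, `with_file(p, lambda fread: fread())` returns the first line of the file and closes the handle afterwards, even if the action raises.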
The current definition is robust for operations that never resume, or operations that resume once
– but there is still trouble when resuming more than once. If someone calls choice
inside
the action
, the second time it
resumes the file handle will be closed again which is probably not intended. There is
active research into using the type system to statically prevent this from happening.
Another way to work with multiple resumptions is to use the initially
function.
This function takes two arguments: the first is a function that is called
when initially is first evaluated, and subsequently every time a particular
resumption is resumed more than once.
Raw Control
Advanced. Use raw ctl for raw control operations which do not automatically
finalize. With raw ctl one can use the implicitly bound
resumption context rcontext to either resume (as rcontext.resume(x)),
or to finalize a resumption (as rcontext.finalize), which runs all
finally handlers to clean up resources. This allows one to store an rcontext
as a first-class value and resume or finalize it later, even from a different
scope. Of course, it needs to be used with care since it is now the
programmer's responsibility to ensure the resumption is eventually resumed
or finalized (so that any resources can be released).
3.4.12. Linear Effects
Todo.
Use linear effect
to declare effects whose operations are always tail-resumptive
and use only linear effects themselves
(and thus resume exactly once). This removes monadic translation for such effects and
can make code that uses only linear effects more compact and efficient.
3.4.13. Named and Scoped Handlers
Todo.
See samples/named-handlers
.
3.5. FBIP: Functional but In-Place
With Perceus reuse analysis we can
write algorithms that dynamically adapt to use in-place mutation when
possible (and use copying when used persistently). Importantly,
you can rely on this optimization happening, e.g. by checking that the match
patterns pair up with same-sized constructors in each branch.
This style of programming leads to a new paradigm that we call FBIP: “functional but in place”. Just like tail-call optimization lets us describe loops in terms of regular function calls, reuse analysis lets us describe in-place mutating imperative algorithms in a purely functional way (and get persistence as well).
Note. FBIP is still active research. In particular we'd like to add ways to add annotations to ensure reuse is taking place.
3.5.1. Tree Rebalancing
As an example, we consider
insertion into a red-black tree [3].
A polymorphic version of this example is part of the samples
directory when you have
installed Koka and can be loaded as :l
samples/basic/rbtree
.
We define red-black trees as:
type color
  Red
  Black

type tree
  Leaf
  Node(color: color, left: tree, key: int, value: bool, right: tree)
The red-black tree has the invariant that the number of black nodes from the root to any of the leaves is the same, and that a red node is never the parent of a red node. Together these ensure that the trees are always balanced. When inserting nodes, the invariants need to be maintained by rebalancing the nodes when needed. Okasaki's algorithm [15] implements this elegantly and functionally:
fun balance-left( l : tree, k : int, v : bool, r : tree ): tree
  match l
    Node(_, Node(Red, lx, kx, vx, rx), ky, vy, ry)
      -> Node(Red, Node(Black, lx, kx, vx, rx), ky, vy, Node(Black, ry, k, v, r))
    ...

fun ins( t : tree, k : int, v : bool ): tree
  match t
    Leaf -> Node(Red, Leaf, k, v, Leaf)
    Node(Red, l, kx, vx, r)
      -> if k < kx then Node(Red, ins(l, k, v), kx, vx, r)
         ...
    Node(Black, l, kx, vx, r)
      -> if k < kx && is-red(l) then balance-left(ins(l,k,v), kx, vx, r)
         ...
The Koka compiler will inline the balance-left
function. At that point,
every matched Node
constructor in the ins
function has a corresponding Node
allocation –
if we consider all branches we can see that we either match one Node
and allocate one, or we match three nodes deep and allocate three. Every
Node
is actually reused in the fast path without doing any allocations!
When studying the generated code, we can see that Perceus assigns the
fields in the nodes in the fast path in-place much like the
usual non-persistent rebalancing algorithm in C would do.
Essentially this means that for a unique tree, the purely functional algorithm above adapts at runtime to an in-place mutating re-balancing algorithm (without any further allocation). Moreover, if we use the tree persistently [16], and the tree is shared or has shared parts, the algorithm adapts to copying exactly the shared spine of the tree (and no more), while still rebalancing in place for any unshared parts.
3.5.2. Morris Traversal
As another example of FBIP, consider mapping a function f
over
all elements in a binary tree in-order as shown in the tmap-inorder
example:
type tree
  Tip
  Bin( left: tree, value : int, right: tree )

fun tmap-inorder( t : tree, f : int -> int ) : tree
  match t
    Bin(l,x,r) -> Bin( l.tmap-inorder(f), f(x), r.tmap-inorder(f) )
    Tip        -> Tip
This is already quite efficient as all the Bin
and Tip
nodes are
reused in-place when t
is unique. However, the tmap-inorder
function is not
tail-recursive and thus uses as much stack space as the depth of the
tree.
void inorder( tree* root, void (*f)(int v) ) {
tree* cursor = root;
while (cursor != NULL /* Tip */) {
if (cursor->left == NULL) {
// no left tree, go down the right
f(cursor->value);
cursor = cursor->right;
} else {
// has a left tree
tree* pre = cursor->left; // find the predecessor
while(pre->right != NULL && pre->right != cursor) {
pre = pre->right;
}
if (pre->right == NULL) {
// first visit, remember to visit right tree
pre->right = cursor;
cursor = cursor->left;
} else {
// already set, restore
f(cursor->value);
pre->right = NULL;
cursor = cursor->right;
}
}
}
}
In 1968, Knuth posed the problem of visiting a tree in-order while using no extra stack- or heap space [7] (For readers not familiar with the problem it might be fun to try this in your favorite imperative language first and see that it is not easy to do). Since then, numerous solutions have appeared in the literature. A particularly elegant solution was proposed by Morris [14]. This is an in-place mutating algorithm that swaps pointers in the tree to “remember” which parts are unvisited. It is beyond this tutorial to give a full explanation, but a C implementation is shown here on the side. The traversal essentially uses a right-threaded tree to keep track of which nodes to visit. The algorithm is subtle, though. Since it transforms the tree into an intermediate graph, we need to state invariants over the so-called Morris loops [12] to prove its correctness.
We can derive a functional and more intuitive solution using the FBIP
technique. We start by defining an explicit visitor data structure
that keeps track of which parts of the tree we still need to visit. In
Koka we define this data type as visitor
:
type visitor
  Done
  BinR( right:tree, value : int, visit : visitor )
  BinL( left:tree, value : int, visit : visitor )
(As an aside,
Conor McBride [13] describes how we can
generically derive a zipper [6] visitor for any
recursive type $\mu x.\,F$ as a list of the derivative of that type,
namely $\mathsf{list}\,(\partial_x F \mid_{x = \mu x.F})$.
In our case, the algebraic representation of the inductive tree
type is $\mu x.\, 1 + x \times \mathit{int} \times x \;\cong\; \mu x.\, 1 + x^2 \times \mathit{int}$.
Calculating the derivative $\mathsf{list}\,(\partial_x (1 + x^2 \times \mathit{int}) \mid_{x = \mathit{tree}})$
and simplifying further,
we get $\mu x.\, 1 + (\mathit{tree} \times \mathit{int} \times x) + (\mathit{tree} \times \mathit{int} \times x)$,
which corresponds exactly to our visitor
data type.)
We also keep track of which direction
in the tree
we are going, either Up
or Down
the tree.
type direction
  Up
  Down
We start our traversal by going downward into the tree with an empty
visitor, expressed as tmap(f, t, Done, Down)
:
fun tmap( f : int -> int, t : tree, visit : visitor, d : direction )
  match d
    Down -> match t       // going down a left spine
      Bin(l,x,r) -> tmap(f,l,BinR(r,x,visit),Down) // A
      Tip        -> tmap(f,Tip,visit,Up)           // B
    Up -> match visit     // go up through the visitor
      Done        -> t                             // C
      BinR(r,x,v) -> tmap(f,r,BinL(t,f(x),v),Down) // D
      BinL(l,x,v) -> tmap(f,Bin(l,x,t),v,Up)       // E
The key idea is that we
are either Done
(C
), or, on going downward in a left spine we
remember all the right trees we still need to visit in a BinR
(A
) or,
going upward again (B
), we remember the left tree that we just
constructed as a BinL
while visiting right trees (D
). When we come
back up (E
), we restore the original tree with the result values. Note
that we apply the function f
to the saved value in branch D
(as we
visit in-order), but the functional implementation makes it easy to
specify a pre-order traversal by applying f
in branch A
, or a
post-order traversal by applying f
in branch E
.
Looking at each branch we can see that each Bin
matches up with a
BinR
, each BinR
with a BinL
, and finally each BinL
with a Bin
.
Since they all have the same size, if the tree is unique, each branch
updates the tree nodes in-place at runtime without any allocation,
where the visitor
structure is effectively overlaid over the tree
nodes while traversing the tree. Since all tmap
calls are tail calls,
this also compiles to a tight loop and thus needs no extra stack- or heap
space.
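The same visitor-based traversal can be sketched in Python, with the tail calls turned into a loop and tuples standing in for the constructors (a hypothetical encoding: `None` is Tip, `(left, value, right)` is Bin, and the visitor is a tagged tuple). Of course, Python offers no reuse analysis; this only illustrates the control structure:

```python
def tmap(f, t):
    # Iterative port of the visitor-based in-order map; the branch
    # labels correspond to A-E in the Koka version.
    visit = ("Done",)
    d = "Down"
    while True:
        if d == "Down":
            if t is None:                      # B: hit a Tip, turn around
                d = "Up"
            else:                              # A: descend, saving the right tree
                l, x, r = t
                visit = ("BinR", r, x, visit)
                t = l
        else:
            tag = visit[0]
            if tag == "Done":                  # C: traversal finished
                return t
            elif tag == "BinR":                # D: apply f, visit the right tree
                _, r, x, v = visit
                visit = ("BinL", t, f(x), v)
                t = r
                d = "Down"
            else:                              # E: rebuild the node, keep going up
                _, l, x, v = visit
                t = (l, x, t)
                visit = v
```

For example, mapping `lambda x: x * 10` over the tree `((None, 1, None), 2, (None, 3, None))` rebuilds it as `((None, 10, None), 20, (None, 30, None))` using constant extra space.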
Finally, just like with re-balancing tree insertion, the algorithm as specified is still purely functional: it uses in-place updating when a unique tree is passed, but it also adapts gracefully to the persistent case where the input tree is shared, or where parts of the input tree are shared, making a single copy of those parts of the tree.
Read the Perceus technical report
4. Koka language specification
This is the draft language specification of the Koka language, version v2.4.0
Currently only the lexical and context-free grammar are specified.
The standard libraries are documented separately.
4.1. Lexical syntax
We define the grammar and lexical syntax of the language using standard BNF notation where non-terminals are generated by alternative patterns:
nonterm | ::= | pattern1 | pattern2 |
In the patterns, we use the following notations:
terminal | A terminal symbol (in ASCII) | |
x1B | A character with hexadecimal code 1B | |
A..F | The characters from “A” to “F” (using ASCII, i.e. x41..x46 ) | |
( pattern ) | Grouping | |
[ pattern ] | Optional occurrence of pattern | |
{ pattern } | Zero or more occurrences of pattern | |
{ pattern }n | Exactly n occurrences of pattern | |
pattern1 | pattern2 | Choice: either pattern1 or pattern2 | |
pattern<!diff> | Difference: elements generated by pattern except those in diff | |
nonterm[lex] | Generate nonterm by drawing lexemes from lex |
Care must be taken to distinguish meta-syntax such as | and )
from concrete terminal symbols as |
and )
. In the specification
the order of the productions is not important and at each point the
longest matching lexeme is preferred. For example, even though
fun
is a reserved word, the word functional
is considered a
single identifier.
4.1.1. Source code
Source code consists of a sequence of unicode characters. Valid characters in actual program code consist strictly of ASCII characters which range from 0 to 127. Only comments, string literals, and character literals are allowed to contain extended unicode characters. The grammar is designed such that a lexical analyzer and parser can directly work on UTF-8 encoded source files without actually doing UTF-8 decoding or unicode category identification.
4.2. Lexical grammar
In the specification of the lexical grammar all white space is explicit and there is no implicit white space between juxtaposed symbols. The lexical token stream is generated by the non-terminal lex which consists of lexemes and whitespace.
Before doing lexical analysis, there is a linefeed character inserted at the start and end of the input, which makes it easier to specify line comments and directives.
4.2.1. Lexical tokens
lex | ::= | lexeme | whitespace | |
lexeme | ::= | conid | qconid | |
| | varid | qvarid | ||
| | op | opid | qopid | wildcard | ||
| | integer | float | stringlit | charlit | ||
| | reserved | opreserved | ||
| | special |
The main program consists of whitespace or lexemes. The context-free grammar will draw its lexemes from the lex production.
4.2.2. Identifiers
anyid | ::= | varid | qvarid | opid | qopid | conid | qconid | |
qconid | ::= | modulepath conid | |
qvarid | ::= | modulepath lowerid | |
modulepath | ::= | lowerid / { lowerid / } | |
conid | ::= | upperid | |
varid | ::= | lowerid<!reserved> | |
lowerid | ::= | lower idtail | |
upperid | ::= | upper idtail | |
wildcard | ::= | _ idtail | |
typevarid | ::= | letter { digit } | |
idtail | ::= | { idchar } [ idfinal ] | |
idchar | ::= | letter | digit | _ | - | |
idfinal | ::= | { ' } | |
reserved | ::= | infix | infixr | infixl | |
| | module | import | as | ||
| | pub | abstract | ||
| | type | struct | alias | effect | con | ||
| | forall | exists | some | ||
| | fun | fn | val | var | extern | ||
| | if | then | else | elif | ||
| | match | return | with | in | ||
| | handle | handler | mask | ||
| | ctl | final | raw | ||
| | override | named | ||
| | interface | break | continue | unsafe | (future reserved words) | |
specialid | ::= | co | rec | open | extend | behind | |
| | linear | value | reference | ||
| | inline | noinline | initially | finally | ||
| | js | c | cs | file |
Identifiers always start with a letter, may contain underscores and dashes, and can end with prime ticks. Like in Haskell, constructors always begin with an uppercase letter while regular identifiers are lowercase. The rationale is to visibly distinguish constants from variables in pattern matches. Here are some example of valid identifiers:
x concat1 visit-left is-nil x' Cons True
To avoid confusion with the subtraction operator, the occurrences of dashes are restricted in identifiers. After lexical analysis, only identifiers where each dash is surrounded on both sides with a letter are accepted:
fold-right
n-1 // illegal, a digit cannot follow a dash
n - 1 // n minus 1
n-x-1 // illegal, a digit cannot follow a dash
n-x - 1 // identifier "n-x" minus 1
n - x - 1 // n minus x minus 1
Qualified identifiers are prefixed with a module path. Module paths can be partial as long as they are unambiguous.
core/map
std/core/(&)
4.2.3. Operators and symbols
qopid | ::= | modulepath opid | |
opid | ::= | ( symbols ) | |
op | ::= | symbols<!opreserved | optype> | || | |
symbols | ::= | symbol { symbol } | / | |
symbol | ::= | $ | % | & | * | + | |
| | ~ | ! | \ | ^ | # | ||
| | = | . | : | - | ? | ||
| | anglebar | ||
anglebar | ::= | < | > | | | |
opreserved | ::= | = | . | : | -> | |
optype | ::= | anglebar anglebar { anglebar } | |
special | ::= | { | } | ( | ) | [ | ] | | | ; | , | |
4.2.4. Literals
charlit | ::= | ' (char<!' | \ > | escape) ' | |
stringlit | ::= | " { char<!" | \ > | escape } " | |
| | r { # }n" rawcharsn" { # }n | (n >= 0) | |
rawcharsn | ::= | { anychar }<!{ anychar } " { # }n { anychar }> | |
escape | ::= | \ ( charesc | hexesc ) | |
charesc | ::= | n | r | t | \ | " | ' | |
hexesc | ::= | x { hexdigit }2 | u { hexdigit }4 | U { hexdigit }6 | |
float | ::= | [ - ] (decfloat | hexfloat) | |
decfloat | ::= | decimal (. digits [ decexp ] | decexp) | |
decexp | ::= | (e | E ) exponent | |
hexfloat | ::= | hexadecimal (. hexdigits [ hexexp ] | hexexp) | |
hexexp | ::= | (p | P ) exponent | |
exponent | ::= | [ - | + ] digit { digit } | |
integer | ::= | [ - ] (decimal | hexadecimal) | |
decimal | ::= | 0 | posdigit [ [ _ ] digits ] | |
hexadecimal | ::= | 0 (x | X ) hexdigits | |
digits | ::= | digit { digit } { _ digit { digit } } | |
hexdigits | ::= | hexdigit { hexdigit } { _ hexdigit { hexdigit } } |
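The integer grammar, with its underscore-separated digit groups, corresponds to a simple regular expression. A Python sketch for illustration (not part of the specification):

```python
import re

# integer ::= [-] (decimal | hexadecimal); underscores only between digit groups,
# and no leading zeros in decimal literals.
INTEGER = re.compile(
    r"^-?(?:"
    r"0[xX][0-9a-fA-F]+(?:_[0-9a-fA-F]+)*"   # hexadecimal
    r"|0"                                     # plain zero
    r"|[1-9](?:_?[0-9]+(?:_[0-9]+)*)?"        # decimal, no leading zero
    r")$")

for ok in ["0", "42", "1_000_000", "-7", "0x1234_ABCD", "0XFF"]:
    assert INTEGER.match(ok), ok
for bad in ["01", "1__0", "_1", "1_", "0x_FF"]:
    assert not INTEGER.match(bad), bad
```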
4.2.5. White space
whitespace | ::= | white { white } | newline | |
white | ::= | space | |
| | linecomment | blockcomment | ||
| | linedirective | ||
linecomment | ::= | // { char | tab } | |
linedirective | ::= | newline # { char | tab } | |
blockcomment | ::= | /* blockpart { blockcomment blockpart } */ | (allows nested comments) |
blockpart | ::= | { anychar }<!{ anychar } (/* |*/ ) { anychar }> |
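Since block comments nest, a lexer must track a nesting depth rather than simply scanning for the first */. A small Python sketch (a hypothetical helper, for illustration):

```python
def skip_block_comment(src: str, i: int) -> int:
    """Scan a (possibly nested) /* ... */ comment starting at src[i];
    returns the index just past the matching closing */."""
    assert src.startswith("/*", i)
    depth = 0
    while i < len(src):
        if src.startswith("/*", i):
            depth += 1          # nested open
            i += 2
        elif src.startswith("*/", i):
            depth -= 1          # close one level
            i += 2
            if depth == 0:
                return i
        else:
            i += 1
    raise SyntaxError("unterminated block comment")

src = "/* outer /* inner */ still a comment */ code"
end = skip_block_comment(src, 0)
assert src[end:] == " code"
```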
4.2.6. Character classes
letter | ::= | upper | lower | |
upper | ::= | A..Z | (i.e. x41..x5A ) |
lower | ::= | a..z | (i.e. x61..x7A ) |
digit | ::= | 0..9 | (i.e. x30..x39 ) |
posdigit | ::= | 1..9 | |
hexdigit | ::= | a..f | A..F | digit | |
anychar | ::= | char | tab | newline | (in comments and raw strings) |
newline | ::= | [ return ] linefeed | (windows or unix style end of line) |
space | ::= | x20 | (a space) |
tab | ::= | x09 | (a tab (\t )) |
linefeed | ::= | x0A | (a line feed (\n )) |
return | ::= | x0D | (a carriage return (\r )) |
char | ::= | unicode<!control | surrogate | bidi> | (includes space) |
unicode | ::= | x00..x10FFFF | |
control | ::= | x00..x1F | x7F | x80..x9F | (C0, DEL, and C1) |
surrogate | ::= | xD800..xDFFF | |
bidi | ::= | x200E | x200F | x202A..x202E | x2066..x2069 | (bi-directional text control) |
Actual program code consists only of 7-bit ASCII characters; only comments and literals can contain extended unicode characters. As such, a lexical analyzer can directly process UTF-8 encoded input as a sequence of bytes without needing UTF-8 decoding or unicode character classification1. For security [2], some character ranges are excluded: the C0 and C1 control codes (except for space, tab, carriage return, and line feed), surrogate characters, and bi-directional text control characters.
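As an illustration of such byte-level processing, the excluded bi-directional controls can be detected directly in the UTF-8 byte stream, using the byte encodings given in the footnote at the end of this document. A Python sketch:

```python
def has_bidi(data: bytes) -> bool:
    """Detect bi-directional text controls directly on UTF-8 bytes:
    E2 80 8E/8F (u200E/u200F), E2 80 AA..AE (u202A..u202E),
    and E2 81 A6..A9 (u2066..u2069). No decoding required."""
    for i in range(len(data) - 2):
        if data[i] != 0xE2:
            continue
        b1, b2 = data[i + 1], data[i + 2]
        if b1 == 0x80 and (b2 in (0x8E, 0x8F) or 0xAA <= b2 <= 0xAE):
            return True
        if b1 == 0x81 and 0xA6 <= b2 <= 0xA9:
            return True
    return False

# U+202E (right-to-left override) is the character used in "trojan source" attacks.
assert has_bidi("if admin\u202E { }".encode())
assert not has_bidi("if admin { }".encode())
```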
4.3. Layout
Just like Haskell, Python, JavaScript, Scala, Go, and other such languages, Koka has a layout rule which automatically adds braces and semicolons at appropriate places:
- Any block that is indented is automatically wrapped with curly braces:
  fun show-messages1( msgs : list<string> ) : console ()
    msgs.foreach fn(msg)
      println(msg)
is elaborated to:
  fun show-messages1( msgs : list<string> ) : console () {
    msgs.foreach fn(msg) {
      println(msg)
    }
  }
- Any statements and declarations that are aligned in a block are terminated with semicolons, that is:
  fun show-messages2( msgs : list<string> ) : console ()
    msgs.foreach fn(msg)
      println(msg)
      println("--")
    println("done")
is fully elaborated to:
  fun show-messages2( msgs : list<string> ) : console () {
    msgs.foreach fn(msg){
      println(msg);
      println("--");
    };
    println("done");
  }
- Long expressions or declarations can still be indented without getting braces or semicolons if it is clear from the start token or the previous token that the line continues an expression or declaration. Here is a contrived example:
  fun eq2( x : int, y : int ) : io bool
    print("calc " ++
          "equ" ++
          "ality")
    val result = if(x == y)
                   then True
                   else False
    result
is elaborated to:
  fun eq2( x : int, y : int ) : io bool {
    print("calc " ++
          "equ" ++
          "ality");
    val result = if (x == y)
                   then True
                   else False;
    result
  }
Here the long string expression is indented but no braces or semicolons are inserted as the previous lines end with an operator (`++`). Similarly, in the `if` expression no braces or semicolons are inserted as the indented lines start with `then` and `else` respectively. In the parameter declaration, the `,` signifies the continuation.
More precisely, for long expressions and declarations, indented or aligned lines do not get braces or semicolons if:

- The line starts with a clear expression or declaration start continuation token, namely: an operator (including `.`), `then`, `else`, `elif`, a closing brace (`)`, `>`, `]`, or `}`), or one of `,`, `->`, `{`, `=`, `|`, `::`, `:=`.
- The previous line ends with a clear expression or declaration end continuation token, namely: an operator (including `.`), an open brace (`(`, `<`, `[`, or `{`), or `,`.
The layout algorithm is performed on the token stream in between lexing and parsing, and is independent of both. In particular, there are no intricate dependencies with the parser (such dependencies lead to very complex layout rules, as is the case in languages like Haskell or JavaScript).
Moreover, in contrast to purely token-based layout rules (as in Scala or Go, for example), the visual indentation of a Koka program corresponds directly to how the compiler interprets the statements. Tricky layout examples in other programming languages are often based on a mismatch between the visual representation and how a compiler interprets the tokens; with Koka's layout rule such issues are largely avoided.
Of course, it is still allowed to use semicolons and braces explicitly, for example to put multiple statements on a single line:
fun equal-line( x : int, y : int ) : io bool { print("calculate equality"); (x == y) }
The layout algorithm also checks for invalid layouts where the layout would not visually correspond to how the compiler interprets the tokens. In particular, it is illegal to indent less than the layout context or to put comments into the indentation (because of tabs or potential unicode characters). For example, the program:
  fun equal( x : int, y : int ) : io bool {
    print("calculate equality")
    result = if (x == y)
   then True                // wrong: too little indentation
    /* wrong */ else False
    result
  }
is rejected. In order to facilitate code generation or source code compression, compilers are also required to support a mode where the layout rule is not applied and no braces or semicolons are inserted. A recognized command line flag for that mode should be --nolayout.
4.3.1. The layout algorithm
To define the layout algorithm formally, we first establish some terminology:
- A new line is started after every linefeed character.
- Any non-white token is called a lexeme; a line without lexemes is called blank.
- The indentation of a lexeme is the column number of its first character on that line (starting at 1), and the indentation of a line is the indentation of the first lexeme on the line.
- A lexeme is an expression continuation if it is the first lexeme on a line, and the lexeme is a start continuation token, or the previous lexeme is an end continuation token (as defined in the previous section).
Because braces can be nested, we use a layout stack of strictly increasing indentations. The top indentation on the layout stack holds the layout indentation. The initial layout stack contains the single value 0 (which is never popped). We now proceed through the token stream where we perform the following operations in order: first brace insertion, then layout stack operations, and finally semicolon insertion:
- Brace insertion: For each non-blank line, consider the first lexeme on the line. If its indentation is larger than the layout indentation, and the lexeme is not an expression continuation, insert an open brace `{` before the lexeme. If its indentation is less than the layout indentation, and the lexeme is not already a closing brace, insert a closing brace `}` before the lexeme.
- Layout stack operations: If the previous lexeme was an open brace `{` or the start of the lexical token sequence, push the indentation of the current lexeme on the layout stack. The pushed indentation must be larger than the previous layout indentation (unless the current lexeme is a closing brace). When a closing brace `}` is encountered, the top indentation is popped from the layout stack.
- Semicolon insertion: For each non-blank line, the indentation must be equal to or larger than the layout indentation. If the indentation equals the layout indentation, and the first lexeme on the line is not an expression continuation, insert a semicolon before the lexeme. A semicolon is also always inserted before a closing brace `}` and before the end of the token sequence.
As defined, braces are inserted around any indented block, and semicolons are inserted whenever statements or declarations are aligned (unless the lexeme happens to be a clear expression continuation). To simplify the grammar specification, a semicolon is also always inserted before a closing brace and at the end of the source. This allows us to specify many grammar elements as terminated by semicolons instead of separated by semicolons, which is more difficult to specify for a LALR(1) grammar.
The layout can be implemented as a separate transformation on the lexical token stream (see the 50-line Haskell implementation in the Koka compiler), or directly as part of the lexer (see the Flex implementation).
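The three steps can also be sketched as a stand-alone pass in Python. This is a simplified illustration, not the actual Koka implementation: it assumes pre-lexed (line, column, text) triples, uses a naive whitespace-splitting lexer for the demo, and omits the validity checks described above:

```python
START_CONT = {"then", "else", "elif", ")", ">", "]", "}",
              ",", "->", "{", "=", "|", "::", ".", ":="}
END_CONT = {"(", "<", "[", "{", ","}
SYMBOLS = set("$%&*+~!\\^#=.:-?<>|/")

def is_operator(tok):
    return bool(tok) and all(c in SYMBOLS for c in tok)

def is_continuation(first, prev):
    # the line starts with a start-continuation token, or the
    # previous line ended with an end-continuation token
    return (first in START_CONT or is_operator(first)
            or (prev is not None and (prev in END_CONT or is_operator(prev))))

def layout(tokens):
    """tokens: (line, column, text) lexemes in order (whitespace removed).
    Returns the token texts with layout braces and semicolons inserted."""
    out, stack = [], [0]      # layout stack of increasing indentations
    prev_text = prev_line = None
    push_pending = True       # start of stream behaves like after an open brace
    for line, col, text in tokens:
        if line != prev_line and not push_pending:
            if col > stack[-1] and not is_continuation(text, prev_text):
                out.append("{")                 # brace insertion: indented block
                push_pending = True
            else:
                while col < stack[-1] and text != "}":
                    out += [";", "}"]           # close inserted blocks on dedent
                    stack.pop()
                if col == stack[-1] and not is_continuation(text, prev_text):
                    out.append(";")             # semicolon: aligned statement
        if push_pending:
            stack.append(col)                   # push new layout indentation
            push_pending = False
        if text == "}":
            out.append(";")                     # semicolon before closing brace
            stack.pop()
        out.append(text)
        if text == "{":
            push_pending = True
        prev_text, prev_line = text, line
    while len(stack) > 2:                       # close still-open inserted blocks
        out += [";", "}"]
        stack.pop()
    out.append(";")                             # semicolon before end of stream
    return out

def lex(src):
    """Naive stand-in lexer: whitespace-separated words with 1-based columns."""
    toks = []
    for lineno, line in enumerate(src.splitlines(), 1):
        i = 0
        for word in line.split():
            i = line.index(word, i)
            toks.append((lineno, i + 1, word))
            i += len(word)
    return toks

src = 'fun hello()\n  println("hi")\n  println("bye")'
assert " ".join(layout(lex(src))) == 'fun hello() { println("hi") ; println("bye") ; } ;'
assert " ".join(layout(lex("val x = 1 +\n    2"))) == "val x = 1 + 2 ;"
```

A real implementation operates on proper lexemes and reports invalid layouts (such as comments in the indentation) rather than silently accepting them.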
4.3.2. Implementation
There is a full Flex (Lex) implementation of lexical analysis and the layout algorithm. Ultimately, the Flex implementation serves as the specification, and this document and the Flex implementation should always be in agreement.
4.4. Context-free syntax
The grammar specification starts with the nonterminal module, which draws its lexical tokens from lex; all whitespace tokens are implicitly ignored.
4.4.1. Modules
module[ lex ] | ::= | [ moduledecl ] modulebody | |
moduledecl | ::= | semis moduleid | |
moduleid | ::= | qvarid | varid | |
modulebody | ::= | { semis declarations } semis | |
| | semis declarations | ||
semis | ::= | { ; } | |
semi | ::= | ; semis |
4.4.2. Top level declarations
declarations | ::= | { importdecl } { fixitydecl } topdecls | |
importdecl | ::= | [ pub ] import [ moduleid = ] moduleid semi | |
fixitydecl | ::= | [ pub ] fixity integer identifier { , identifier } semi | |
fixity | ::= | infixl | infixr | infix | |
topdecls | ::= | { topdecl semi } | |
topdecl | ::= | [ pub ] puredecl | |
| | [ pub ] aliasdecl | ||
| | [ pub ] externdecl | ||
| | [ pubabstract ] typedecl | ||
| | [ pubabstract ] effectdecl | ||
pub | ::= | pub | |
pubabstract | ::= | pub | abstract |
4.4.3. Type Declarations
aliasdecl | ::= | alias typeid [ typeparams ] [ kannot ] = type | |
typedecl | ::= | typemod type typeid [ typeparams ] [ kannot ] [ typebody ] | |
| | structmod struct typeid [ typeparams ] [ kannot ] [ conparams ] | ||
typemod | ::= | co | rec | open | extend | structmod | |
structmod | ::= | value | reference | |
typeid | ::= | varid | [] | ( { , } ) | < > | < | > | |
typeparams | ::= | < [ tbinders ] > | |
tbinders | ::= | tbinder { , tbinder } | |
tbinder | ::= | varid [ kannot ] | |
typebody | ::= | { semis { constructor semi } } | |
constructor | ::= | [ pub ] [ con ] conid [ typeparams ] [ conparams ] | |
conparams | ::= | { semis { parameter semi } } |
4.4.4. Value and Function Declarations
puredecl | ::= | [ inlinemod ] val valdecl | |
| | [ inlinemod ] fun fundecl | ||
inlinemod | ::= | inline | noinline | |
valdecl | ::= | binder = blockexpr | |
binder | ::= | identifier [ : type ] | |
fundecl | ::= | funid funbody | |
funbody | ::= | funparam blockexpr | |
funparam | ::= | [ typeparams ] pparameters [ : tresult ] [ qualifier ] | |
funid | ::= | identifier | |
| | [ { , } ] | (indexing operator) | |
parameters | ::= | ( [ parameter { , parameter } ] ) | |
parameter | ::= | [ borrow ] paramid [ : type ] [ = expr ] | |
pparameters | ::= | ( [ pparameter { , pparameter } ] ) | (pattern matching parameters) |
pparameter | ::= | [ borrow ] pattern [ : type ] [ = expr ] | |
paramid | ::= | identifier | wildcard | |
borrow | ::= | ^ | (not allowed from conparams) |
qidentifier | ::= | qvarid | qidop | identifier | |
identifier | ::= | varid | idop | |
qoperator | ::= | op | |
qconstructor | ::= | conid | qconid |
4.4.5. Statements
block | ::= | { semis { statement semi } } | |
statement | ::= | decl | |
| | withstat | ||
| | withstat in expr | ||
| | returnexpr | ||
| | basicexpr | ||
decl | ::= | fun fundecl | |
| | val apattern = blockexpr | (local values can use a pattern binding) | |
| | var binder := blockexpr |
4.4.6. Expressions
blockexpr | ::= | expr | (block is interpreted as statements) |
expr | ::= | withexpr | |
block | (interpreted as fn(){...} ) | ||
returnexpr | |||
valexpr | |||
basicexpr | |||
basicexpr | ::= | ifexpr | |
| | fnexpr | ||
| | matchexpr | ||
| | handlerexpr | ||
| | opexpr | ||
ifexpr | ::= | if ntlexpr then blockexpr { elif } [ else blockexpr ] | |
| | if ntlexpr return expr | ||
elif | ::= | elif ntlexpr then blockexpr | |
matchexpr | ::= | match ntlexpr { semis { matchrule semi } } | |
returnexpr | ::= | return expr | |
fnexpr | ::= | fn funbody | (anonymous lambda expression) |
valexpr | ::= | val apattern = blockexpr in expr | |
withexpr | ::= | withstat in expr | |
withstat | ::= | with basicexpr | |
with binder <- basicexpr | |||
with [ override ] heff opclause | (with single operation) | ||
with binder <- heff opclause | (with named single operation) |
4.4.7. Operator expressions
For simplicity, we parse all operators as if they are left associative with the same precedence. We assume that a separate pass in the compiler uses the fixity declarations that are in scope to properly associate all operators in an expression.
opexpr | ::= | prefixexpr { qoperator prefixexpr } | |
prefixexpr | ::= | { ! | ~ } appexpr | |
appexpr | ::= | appexpr ( [ arguments ] ) | (regular application) |
| | appexpr [ [ arguments ] ] | (index operation) | |
| | appexpr (fnexpr | block) | (trailing lambda expression) | |
| | appexpr . atom | ||
| | atom | ||
ntlexpr | ::= | ntlprefixexpr { qoperator ntlprefixexpr } | (non trailing lambda expression) |
ntlprefixexpr | ::= | { ! | ~ } ntlappexpr | |
ntlappexpr | ::= | ntlappexpr ( [ arguments ] ) | (regular application) |
| | ntlappexpr [ [ arguments ] ] | (index operation) | |
| | ntlappexpr . atom | ||
| | atom | ||
arguments | ::= | argument { , argument } | |
argument | ::= | [ identifier = ] expr |
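Such a fixity-resolution pass can be sketched with precedence climbing. The following Python illustration is a sketch only, and the fixity table in the example is hypothetical rather than Koka's actual default fixities:

```python
def associate(items, fixity):
    """Re-associate a flat opexpr [e, op, e, op, e, ...] into a nested
    (op, lhs, rhs) tree by precedence climbing.
    fixity maps each operator to (precedence, 'l' | 'r')."""
    def climb(lhs, pos, min_prec):
        while pos < len(items) and fixity[items[pos]][0] >= min_prec:
            op = items[pos]
            prec, _assoc = fixity[op]
            rhs = items[pos + 1]
            pos += 2
            # fold in tighter-binding (or right-associative equal) operators
            while pos < len(items):
                nprec, nassoc = fixity[items[pos]]
                if nprec > prec or (nprec == prec and nassoc == "r"):
                    rhs, pos = climb(rhs, pos, prec + 1 if nprec > prec else prec)
                else:
                    break
            lhs = (op, lhs, rhs)
        return lhs, pos
    tree, _ = climb(items[0], 1, 0)
    return tree

# hypothetical declarations: infixl 6 (+); infixl 7 (*); infixr 5 (++)
fixity = {"+": (6, "l"), "*": (7, "l"), "++": (5, "r")}
assert associate([1, "+", 2, "*", 3], fixity) == ("+", 1, ("*", 2, 3))
assert associate([1, "+", 2, "+", 3], fixity) == ("+", ("+", 1, 2), 3)
assert associate(["a", "++", "b", "++", "c"], fixity) == ("++", "a", ("++", "b", "c"))
```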
4.4.8. Atomic expressions
atom | ::= | qidentifier | |
| | qconstructor | ||
| | literal | ||
| | mask | ||
| | ( ) | (unit) | |
| | ( annexpr ) | (parenthesized expression) | |
| | ( annexprs ) | (tuple expression) | |
| | [ [ annexpr { , annexprs } [ , ] ] ] | (list expression) | |
literal | ::= | natural | float | charlit | stringlit | |
mask | ::= | mask [ behind ] < tbasic > | |
annexprs | ::= | annexpr { , annexpr } | |
annexpr | ::= | expr [ : typescheme ] |
4.4.9. Matching
matchrule | ::= | patterns [ | expr ] -> blockexpr | |
apattern | ::= | pattern [ typescheme ] | |
pattern | ::= | identifier | |
| | identifier as apattern | (named pattern) | |
| | qconstructor [( [ patargs ] ) ] | ||
| | ( [ apatterns ] ) | (unit, parenthesized pattern, tuple pattern) | |
| | [ [ apatterns ] ] | (list pattern) | |
| | literal | ||
| | wildcard | ||
patterns | ::= | pattern { , pattern } | |
apatterns | ::= | apattern { , apattern } | |
patargs | ::= | patarg { , patarg } | |
patarg | ::= | [ identifier = ] apattern | (possibly named parameter) |
4.4.10. Effect Declarations
effectdecl | ::= | [ named ] effectmod effect varid [ typeparams ] [ kannot ] [ opdecls ] | |
| | [ named ] effectmod effect [ typeparams ] [ kannot ] opdecl | ||
| | named effectmod effect varid [ typeparams ] [ kannot ] in type [ opdecls ] | ||
effectmod | ::= | [ linear ] [ rec ] | |
named | ::= | named | |
opdecls | ::= | { semis { opdecl semi } } | |
opdecl | ::= | [ pub ] val identifier [ typeparams ] : tatom | |
| | [ pub ] (fun | ctl ) identifier [ typeparams ] parameters : tatom |
4.4.11. Handler Expressions
handlerexpr | ::= | [ override ] handler heff opclauses | |
| | [ override ] handle heff ( expr ) opclauses | ||
| | named handler heff opclauses | ||
| | named handle heff ( expr ) opclauses | ||
heff | ::= | [ < tbasic > ] | |
opclauses | ::= | { semis { opclausex semi } } | |
opclausex | ::= | opclause | |
| | finally blockexpr | ||
| | initially ( oparg ) blockexpr | ||
opclause | ::= | val qidentifier [ type ] = blockexpr | |
| | fun qidentifier opargs blockexpr | ||
| | [ ctlmod ] ctl qidentifier opargs blockexpr |
| | return ( oparg ) blockexpr | ||
ctlmod | ::= | final | raw | |
opargs | ::= | ( [ oparg { , oparg } ] ) | |
oparg | ::= | paramid [ : type ] |
4.4.12. Type schemes
typescheme | ::= | somes foralls tarrow [ qualifier ] | ||
type | ::= | foralls tarrow [ qualifier ] | ||
foralls | ::= | [ forall typeparams ] | ||
some | ::= | [ some typeparams ] | ||
qualifier | ::= | with ( predicates ) | ||
predicates | ::= | predicate { , predicate } | ||
predicate | ::= | typeapp | (interface) |
4.4.13. Types
tarrow | ::= | tatom [ -> tresult ] | |
tresult | ::= | tatom [ tbasic ] | |
tatom | ::= | tbasic | |
| | < anntype { , anntype } [ | tatom ] > | ||
| | < > | ||
tbasic | ::= | typeapp | |
| | ( ) | (unit type) | |
| | ( tparam ) | (parenthesized type or type parameter) | |
| | ( tparam { , tparam } ) | (tuple type or parameters) | |
| | [ anntype ] | (list type) | |
typeapp | ::= | typecon [ < anntype { , anntype } > ] | |
typecon | ::= | varid | qvarid | |
| | wildcard | ||
| | ( , { , } ) | (tuple constructor) | |
| | [ ] | (list constructor) | |
| | ( -> ) | (function constructor) | |
tparam | ::= | [ varid : ] anntype | |
anntype | ::= | type [ kannot ] |
4.4.14. Kinds
kannot | ::= | :: kind | |
kind | ::= | ( kind { , kind } ) -> kind | |
| | katom -> kind | ||
| | katom | ||
katom | ::= | V | (value type) |
| | X | (effect type) | |
| | E | (effect row) | |
| | H | (heap type) | |
| | P | (predicate type) | |
| | S | (scope type) | |
| | HX | (handled effect type) | |
| | HX1 | (handled linear effect type) |
4.4.15. Implementation
As a companion to the Flex lexical implementation, there is a full Bison (Yacc) LALR(1) implementation available. Again, the Bison parser serves as the specification of the grammar, and this document should always be in agreement with that implementation.
References
Appendix
A. Full grammar specification
A.1. Lexical syntax
lex | ::= | lexeme | whitespace | |
lexeme | ::= | conid | qconid | |
| | varid | qvarid | ||
| | op | opid | qopid | wildcard | ||
| | integer | float | stringlit | charlit | ||
| | reserved | opreserved | ||
| | special | ||
anyid | ::= | varid | qvarid | opid | qopid | conid | qconid | |
qconid | ::= | modulepath conid | |
qvarid | ::= | modulepath lowerid | |
modulepath | ::= | lowerid / { lowerid / } | |
conid | ::= | upperid | |
varid | ::= | lowerid<!reserved> | |
lowerid | ::= | lower idtail | |
upperid | ::= | upper idtail | |
wildcard | ::= | _ idtail | |
typevarid | ::= | letter { digit } | |
idtail | ::= | { idchar } [ idfinal ] | |
idchar | ::= | letter | digit | _ | - | |
idfinal | ::= | { ' } | |
reserved | ::= | infix | infixr | infixl | |
| | module | import | as | ||
| | pub | abstract | ||
| | type | struct | alias | effect | con | ||
| | forall | exists | some | ||
| | fun | fn | val | var | extern | ||
| | if | then | else | elif | ||
| | match | return | with | in | ||
| | handle | handler | mask | ||
| | ctl | final | raw | ||
| | override | named | ||
| | interface | break | continue | unsafe | (future reserved words) | |
specialid | ::= | co | rec | open | extend | behind | |
| | linear | value | reference | ||
| | inline | noinline | initially | finally | ||
| | js | c | cs | file | ||
qopid | ::= | modulepath opid | |
opid | ::= | ( symbols ) | |
op | ::= | symbols<!opreserved | optype> | || | |
symbols | ::= | symbol { symbol }| / | |
symbol | ::= | $ | % | & | * | + | |
| | ~ | ! | \ | ^ | # | ||
| | = | . | : | - | ? | ||
| | anglebar | ||
anglebar | ::= | < | > | | | |
opreserved | ::= | = | . | : | -> | |
optype | ::= | anglebar anglebar { anglebar } | |
special | ::= | { | } | ( | ) | [ | ] | | | ; | , | |
charlit | ::= | ' (char<!' | \ > | escape) ' | |
stringlit | ::= | " { char<!" | \ > | escape } " | |
| | r { # }n" rawcharsn" { # }n | (n >= 0) | |
rawcharsn | ::= | { anychar }<!{ anychar } " { # }n { anychar }> | |
escape | ::= | \ ( charesc | hexesc ) | |
charesc | ::= | n | r | t | \ | " | ' | |
hexesc | ::= | x { hexdigit }2 | u { hexdigit }4 | U { hexdigit }6 | |
float | ::= | [ - ] (decfloat | hexfloat) | |
decfloat | ::= | decimal (. digits [ decexp ] | decexp) | |
decexp | ::= | (e | E ) exponent | |
hexfloat | ::= | hexadecimal (. hexdigits [ hexexp ] | hexexp) | |
hexexp | ::= | (p | P ) exponent | |
exponent | ::= | [ - | + ] digit { digit } | |
integer | ::= | [ - ] (decimal | hexadecimal) | |
decimal | ::= | 0 | posdigit [ [ _ ] digits ] | |
hexadecimal | ::= | 0 (x | X ) hexdigits | |
digits | ::= | digit { digit } { _ digit { digit } } | |
hexdigits | ::= | hexdigit { hexdigit } { _ hexdigit { hexdigit } } | |
whitespace | ::= | white { white } | newline | |
white | ::= | space | |
| | linecomment | blockcomment | ||
| | linedirective | ||
linecomment | ::= | // { char | tab } | |
linedirective | ::= | newline # { char | tab } | |
blockcomment | ::= | /* blockpart { blockcomment blockpart } */ | (allows nested comments) |
blockpart | ::= | { anychar }<!{ anychar } (/* |*/ ) { anychar }> | |
letter | ::= | upper | lower | |
upper | ::= | A..Z | (i.e. x41..x5A ) |
lower | ::= | a..z | (i.e. x61..x7A ) |
digit | ::= | 0..9 | (i.e. x30..x39 ) |
posdigit | ::= | 1..9 | |
hexdigit | ::= | a..f | A..F | digit | |
anychar | ::= | char | tab | newline | (in comments and raw strings) |
newline | ::= | [ return ] linefeed | (windows or unix style end of line) |
space | ::= | x20 | (a space) |
tab | ::= | x09 | (a tab (\t )) |
linefeed | ::= | x0A | (a line feed (\n )) |
return | ::= | x0D | (a carriage return (\r )) |
char | ::= | unicode<!control | surrogate | bidi> | (includes space) |
unicode | ::= | x00..x10FFFF | |
control | ::= | x00..x1F | x7F | x80..x9F | (C0, DEL, and C1) |
surrogate | ::= | xD800..xDFFF | |
bidi | ::= | x200E | x200F | x202A..x202E | x2066..x2069 | (bi-directional text control) |
A.2. Context-free syntax
module[ lex ] | ::= | [ moduledecl ] modulebody | ||
moduledecl | ::= | semis moduleid | ||
moduleid | ::= | qvarid | varid | ||
modulebody | ::= | { semis declarations } semis | ||
| | semis declarations | |||
semis | ::= | { ; } | ||
semi | ::= | ; semis | ||
declarations | ::= | { importdecl } { fixitydecl } topdecls | ||
importdecl | ::= | [ pub ] import [ moduleid = ] moduleid semi | ||
fixitydecl | ::= | [ pub ] fixity integer identifier { , identifier } semi | ||
fixity | ::= | infixl | infixr | infix | ||
topdecls | ::= | { topdecl semi } | ||
topdecl | ::= | [ pub ] puredecl | ||
| | [ pub ] aliasdecl | |||
| | [ pub ] externdecl | |||
| | [ pubabstract ] typedecl | |||
| | [ pubabstract ] effectdecl | |||
pub | ::= | pub | ||
pubabstract | ::= | pub | abstract | ||
aliasdecl | ::= | alias typeid [ typeparams ] [ kannot ] = type | ||
typedecl | ::= | typemod type typeid [ typeparams ] [ kannot ] [ typebody ] | ||
| | structmod struct typeid [ typeparams ] [ kannot ] [ conparams ] | |||
typemod | ::= | co | rec | open | extend | structmod | ||
structmod | ::= | value | reference | ||
typeid | ::= | varid | [] | ( { , } ) | < > | < | > | ||
typeparams | ::= | < [ tbinders ] > | ||
tbinders | ::= | tbinder { , tbinder } | ||
tbinder | ::= | varid [ kannot ] | ||
typebody | ::= | { semis { constructor semi } } | ||
constructor | ::= | [ pub ] [ con ] conid [ typeparams ] [ conparams ] | ||
conparams | ::= | { semis { parameter semi } } | ||
puredecl | ::= | [ inlinemod ] val valdecl | ||
| | [ inlinemod ] fun fundecl | |||
inlinemod | ::= | inline | noinline | ||
valdecl | ::= | binder = blockexpr | ||
binder | ::= | identifier [ : type ] | ||
fundecl | ::= | funid funbody | ||
funbody | ::= | funparam blockexpr | ||
funparam | ::= | [ typeparams ] pparameters [ : tresult ] [ qualifier ] | ||
funid | ::= | identifier | ||
| | [ { , } ] | (indexing operator) | ||
parameters | ::= | ( [ parameter { , parameter } ] ) | ||
parameter | ::= | [ borrow ] paramid [ : type ] [ = expr ] | ||
pparameters | ::= | ( [ pparameter { , pparameter } ] ) | (pattern matching parameters) | |
pparameter | ::= | [ borrow ] pattern [ : type ] [ = expr ] | ||
paramid | ::= | identifier | wildcard | ||
borrow | ::= | ^ | (not allowed from conparams) | |
qidentifier | ::= | qvarid | qidop | identifier | ||
identifier | ::= | varid | idop | ||
qoperator | ::= | op | ||
qconstructor | ::= | conid | qconid | ||
block | ::= | { semis { statement semi } } | ||
statement | ::= | decl | ||
| | withstat | |||
| | withstat in expr | |||
| | returnexpr | |||
| | basicexpr | |||
decl | ::= | fun fundecl | ||
| | val apattern = blockexpr | (local values can use a pattern binding) | ||
| | var binder := blockexpr | |||
blockexpr | ::= | expr | (block is interpreted as statements) | |
expr | ::= | withexpr | ||
block | (interpreted as fn(){...} ) | |||
returnexpr | ||||
valexpr | ||||
basicexpr | ||||
basicexpr | ::= | ifexpr | ||
| | fnexpr | |||
| | matchexpr | |||
| | handlerexpr | |||
| | opexpr | |||
ifexpr | ::= | if ntlexpr then blockexpr { elif } [ else blockexpr ] | ||
| | if ntlexpr return expr | |||
elif | ::= | elif ntlexpr then blockexpr | ||
matchexpr | ::= | match ntlexpr { semis { matchrule semi } } | ||
returnexpr | ::= | return expr | ||
fnexpr | ::= | fn funbody | (anonymous lambda expression) | |
valexpr | ::= | val apattern = blockexpr in expr | ||
withexpr | ::= | withstat in expr | ||
withstat | ::= | with basicexpr | ||
with binder <- basicexpr | ||||
with [ override ] heff opclause | (with single operation) | |||
with binder <- heff opclause | (with named single operation) | |||
opexpr | ::= | prefixexpr { qoperator prefixexpr } | ||
prefixexpr | ::= | { ! | ~ } appexpr | ||
appexpr | ::= | appexpr ( [ arguments ] ) | (regular application) | |
| | appexpr [ [ arguments ] ] | (index operation) | ||
| | appexpr (fnexpr | block) | (trailing lambda expression) | ||
| | appexpr . atom | |||
| | atom | |||
ntlexpr | ::= | ntlprefixexpr { qoperator ntlprefixexpr } | (non trailing lambda expression) | |
ntlprefixexpr | ::= | { ! | ~ } ntlappexpr | ||
ntlappexpr | ::= | ntlappexpr ( [ arguments ] ) | (regular application) | |
| | ntlappexpr [ [ arguments ] ] | (index operation) | ||
| | ntlappexpr . atom | |||
| | atom | |||
arguments | ::= | argument { , argument } | ||
argument | ::= | [ identifier = ] expr | ||
atom | ::= | qidentifier | ||
| | qconstructor | |||
| | literal | |||
| | mask | |||
| | ( ) | (unit) | ||
| | ( annexpr ) | (parenthesized expression) | ||
| | ( annexprs ) | (tuple expression) | ||
| | [ [ annexpr { , annexprs } [ , ] ] ] | (list expression) | ||
literal | ::= | natural | float | charlit | stringlit | ||
mask | ::= | mask [ behind ] < tbasic > | ||
annexprs | ::= | annexpr { , annexpr } | ||
annexpr | ::= | expr [ : typescheme ] | ||
matchrule | ::= | patterns [ | expr ] -> blockexpr | ||
apattern | ::= | pattern [ typescheme ] | ||
pattern | ::= | identifier | ||
| | identifier as apattern | (named pattern) | ||
| | qconstructor [( [ patargs ] ) ] | |||
| | ( [ apatterns ] ) | (unit, parenthesized pattern, tuple pattern) | ||
| | [ [ apatterns ] ] | (list pattern) | ||
| | literal | |||
| | wildcard | |||
patterns | ::= | pattern { , pattern } | ||
apatterns | ::= | apattern { , apattern } | ||
patargs | ::= | patarg { , patarg } | ||
patarg | ::= | [ identifier = ] apattern | (possibly named parameter) | |
effectdecl | ::= | [ named ] effectmod effect varid [ typeparams ] [ kannot ] [ opdecls ] | ||
| | [ named ] effectmod effect [ typeparams ] [ kannot ] opdecl | |||
| | named effectmod effect varid [ typeparams ] [ kannot ] in type [ opdecls ] | |||
effectmod | ::= | [ linear ] [ rec ] | ||
named | ::= | named | ||
opdecls | ::= | { semis { opdecl semi } } | ||
opdecl | ::= | [ pub ] val identifier [ typeparams ] : tatom | ||
| | [ pub ] (fun | ctl ) identifier [ typeparams ] parameters : tatom | |||
handlerexpr | ::= | [ override ] handler heff opclauses | ||
| | [ override ] handle heff ( expr ) opclauses | |||
| | named handler heff opclauses | |||
| | named handle heff ( expr ) opclauses | |||
heff | ::= | [ < tbasic > ] | ||
opclauses | ::= | { semis { opclausex semi } } | ||
opclausex | ::= | opclause | |
| | finally blockexpr | |||
| | initially ( oparg ) blockexpr | |||
opclause | ::= | val qidentifier [ type ] = blockexpr | ||
| | fun qidentifier opargs blockexpr | |||
| | [ ctlmod ] ctl qidentifier opargs blockexpr | |
| | return ( oparg ) blockexpr | |||
ctlmod | ::= | final | raw | ||
opargs | ::= | ( [ oparg { , oparg } ] ) | ||
oparg | ::= | paramid [ : type ] | ||
typescheme | ::= | somes foralls tarrow [ qualifier ] | ||
type | ::= | foralls tarrow [ qualifier ] | ||
foralls | ::= | [ forall typeparams ] | ||
some | ::= | [ some typeparams ] | ||
qualifier | ::= | with ( predicates ) | ||
predicates | ::= | predicate { , predicate } | ||
predicate | ::= | typeapp | (interface) | |
tarrow | ::= | tatom [ -> tresult ] | ||
tresult | ::= | tatom [ tbasic ] | ||
tatom | ::= | tbasic | ||
| | < anntype { , anntype } [ | tatom ] > | |||
| | < > | |||
tbasic | ::= | typeapp | ||
| | ( ) | (unit type) | ||
| | ( tparam ) | (parenthesized type or type parameter) | ||
| | ( tparam { , tparam } ) | (tuple type or parameters) | ||
| | [ anntype ] | (list type) | ||
typeapp | ::= | typecon [ < anntype { , anntype } > ] | ||
typecon | ::= | varid | qvarid | ||
| | wildcard | |||
| | ( , { , } ) | (tuple constructor) | ||
| | [ ] | (list constructor) | ||
| | ( -> ) | (function constructor) | ||
tparam | ::= | [ varid : ] anntype | ||
anntype | ::= | type [ kannot ] | ||
kannot | ::= | :: kind | ||
kind | ::= | ( kind { , kind } ) -> kind | ||
| | katom -> kind | |||
| | katom | |||
katom | ::= | V | (value type) | |
| | X | (effect type) | ||
| | E | (effect row) | ||
| | H | (heap type) | ||
| | P | (predicate type) | ||
| | S | (scope type) | ||
| | HX | (handled effect type) | ||
| | HX1 | (handled linear effect type) |
1. This is used, for example, in the Flex implementation. In particular, we only need to adapt the char definition:
char | ::= | unicode<!control | surrogate | bidi> | |
unicode | ::= | x00..x7F | (ASCII) |
| | (xC2..xDF ) cont | ||
| | xE0 (xA0..xBF ) cont | (exclude overlong encodings) | |
| | (xE1..xEF ) cont cont | ||
| | xF0 (x90..xBF ) cont cont | (exclude overlong encodings) | |
| | (xF1..xF3 ) cont cont cont | ||
| | xF4 (x80..x8F ) cont cont | (no codepoint larger than x10FFFF ) | |
cont | ::= | x80..xBF | |
surrogate | ::= | xED (xA0..xBF ) cont | |
control | ::= | x00..x1F | |
| | x7F | ||
| | xC2 (x80..x9F ) | ||
bidi | ::= | xE2 0x80 (0x8E..0x8F ) | (left-to-right mark (u200E ) and right-to-left mark (u200F )) |
| | xE2 0x80 (0xAA..0xAE ) | (left-to-right embedding (u202A ) up to right-to-left override (u202E )) | |
| | xE2 0x81 (0xA6..0xA9 ) | (left-to-right isolate (u2066 ) up to pop directional isolate (u2069 )) |