11 April 2017

Lately in Kitten #2: Closures, Boxing, Permissions, and Assumptions

Recent work on Kitten has been focused on design rather than implementation, so those of you who’ve been following the project might have noticed an apparent lull in development activity. Having figured out a few issues, I’ll be getting back to the code this week. Here’s an overview of the stuff I’ve been thinking about.

Closure Conversion

Closure conversion and lambda lifting are two operations central to implementing a compiler for a functional programming language. The goal is to turn anonymous functions that implicitly capture variables from the enclosing scope into closures that explicitly record which variables they capture, and to lift inline anonymous functions into top-level definitions.

For instance, the expression -> x; { x } takes a value from the stack and wraps it in a function that simply returns that value when called. The quotation { x } captures the local variable x from the enclosing scope. Currently, the compiler transforms this into an explicit “capture” expression, represented in pseudo-Kitten as:

$(local.0){ closure.0 }

When execution reaches this expression, the local variable (local.0) is copied into the closure; when the closure is executed, closure.0 refers to this captured value.

However, the implementation of this process is overly complex, and makes a needless distinction between closure variables and local variables. I’ve figured out a much nicer (and more traditional) implementation that represents closure variables simply as extra parameters passed on top of the stack when a closure is invoked. Again in pseudo-Kitten, this yields:

local.0 { -> closure.0; closure.0 } new.closure.1

The advantage here is that it makes it very simple to produce unboxed closures: the new.closure.n internal instruction simply coerces the top of the stack from a series of n variables followed by a function pointer into a single opaque closure value. When the closure is invoked with call, the coercion is performed in reverse, yielding the captured values as ordinary arguments on the stack beneath a function pointer, which is then called just like any other function.

Furthermore, this implementation of unboxed closures maps very nicely to existential types. Briefly, we can represent a closure as an unboxed pair of the captured values and a function pointer, using existential quantification to hide the actual types of the captured values. With this change, you’ll be able to pass closures around as ordinary stack-allocated values. Because the closure type is abstract, it doesn’t unify with any other type—as in Rust, internally each closure will get a unique anonymous type.
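
To see why existentials fit, here's a rough Haskell model of a closure as an existential pair (Haskell because that's what the compiler is written in; the names Closure, call, and constClosure are mine, not compiler internals):

{-# LANGUAGE ExistentialQuantification #-}

-- A closure pairs a captured environment of hidden type 'env' with a
-- code pointer that expects that environment as an extra argument.
data Closure a b = forall env. Closure env (env -> a -> b)

-- Invoking a closure unpacks the pair and passes the environment back in.
call :: Closure a b -> a -> b
call (Closure env f) = f env

-- The example from above: -> x; { x } captures x and returns it when called.
constClosure :: x -> Closure a x
constClosure x = Closure x (\captured _argument -> captured)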

Boxing

After unboxed closures are implemented, you will only need to pay the cost of boxing a closure when you want to store it in another data structure, and this will be explicit. For example, currently you might write a list of functions of type List<(Int32 -> Int32)> as:

[\(+ x), \(* x), \(/ x)]

Once unboxed closures are implemented, you will need to write:

[\(+ x) box, \(* x) box, \(/ x) box]

This is essentially the same as std::function<int32_t(int32_t)> in C++, which performs boxing internally.

Likewise, the list literal notation [1, 2, 3] will be changed to produce an unboxed array of known size, with a type like Array<Int32, 3>—the same as the type int32_t[3] in C or C++. You will only need to allocate memory when you want this size to be dynamic: [1, 2, 3] box will have the type List<Int32>, equivalent to std::vector<int32_t> in C++.
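
As a rough Haskell illustration of that distinction (my sketch, not Kitten's actual representation), a length-indexed vector plays the role of Array<Int32, 3>, and box forgets the static length:

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import Data.Int (Int32)

data Nat = Z | S Nat

-- Vec n a: a sequence whose length is part of its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Boxing forgets the static length, yielding a dynamically sized list.
box :: Vec n a -> [a]
box Nil         = []
box (Cons x xs) = x : box xs

-- [1, 2, 3] as an unboxed array of known size.
example :: Vec ('S ('S ('S 'Z))) Int32
example = Cons 1 (Cons 2 (Cons 3 Nil))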

In the future, I may introduce more concise notations such as {| … |} and [| … |] for boxed closures and lists, respectively. However, the box word has the advantage of being more searchable.

In addition, the box trait will require the +Alloc permission, which will be implicitly granted to all definitions by default. You will be able to waive the permission to allocate by adding -Alloc to a type signature, and this should be very useful for statically guaranteeing that critical code paths allocate no heap memory.

Permissions

While adding the implicit +Alloc permission, I plan to grant some other permissions by default as well. For example, +Fail is the permission to abort the program with an assertion failure, which overflow-checked arithmetic operations will require by default. If you waive this permission by adding -Fail to the signature of a definition, then that definition is statically guaranteed not to raise any assertion failures.

Because these implicit permissions will be backward-compatible, we will be able to add additional permissions in the future for easy, fine-grained control over low-level performance and safety details. For example, if we make it possible to give type parameters to a permission, then we might add such permissions as +Read<R> and +Write<R> for reasoning about accesses to some region of memory R; with some metadata, the compiler will be able to tell that reads and writes to different memory regions are commutative, enabling it to safely reorder instructions to produce more instruction-level parallelism. This, in turn, should make it easier to write correct high-performance lock-free code.

Furthermore, I plan to extend the permission system to allow permissions to provide data; this could be used to implement such things as dynamic scoping, typeclasses, and Scala-style implicits. Because all terms denote functions, there’s no material difference between an effect (an action the code is allowed to take on the environment) and a coeffect (a constraint on the environment in which some code is allowed to run). So with this small change, Kitten’s permission system will be able to neatly represent both.

As with the free monad approach to extensible effects, these changes will allow the same permission to be interpreted by different handlers. So you will be able to write a single definition with a user-defined permission (such as +Database) and then interpret the same code in different ways (such as accessing a mock, test, or production database), enabling more code reuse and easier testing.
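
To make that concrete, here's a minimal free-monad sketch in Haskell; the DatabaseF effect, its Query operation, and both handlers are hypothetical names of mine:

data DatabaseF k = Query String (String -> k)

data Free f a = Pure a | Bind (f (Free f a))

-- A program holding the "+Database permission": it may issue queries.
query :: String -> Free DatabaseF String
query q = Bind (Query q Pure)

-- The same program interpreted by two different handlers.
runMock :: Free DatabaseF a -> a
runMock (Pure x)           = x
runMock (Bind (Query _ k)) = runMock (k "mock row")

runReal :: Free DatabaseF a -> IO a
runReal (Pure x)           = pure x
runReal (Bind (Query q k)) = do
  putStrLn ("querying: " ++ q)  -- stand-in for a real database call
  runReal (k "real row")

(A real version would also give Free its Functor and Monad instances so larger programs can be composed with do-notation.)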

Assumptions

Finally, I’ve been working on figuring out the details of an additional extension to the permission system, which I term assumptions. Whereas permissions are attached to function types to describe their side effects and constraints, assumptions will be attached to values to describe simple invariants. I haven’t quite figured out the syntax for this, but for example, the type List<Int32> @Sorted @NonEmpty might describe a list of integers that is both sorted and non-empty.

The compiler should be able to use these assumptions to produce better error messages, as well as better compiled code, e.g., by eliminating redundant checks. I envision an unshared mutable reference type Owned<T> and a shared reference-counted variant Shared<T>. Assumptions would let you safely claim ownership of a shared reference with a function such as claim with the type Shared<T> @Unique -> Owned<T>.

I can also see this being used to constrain the ranges of numeric types, such as Int32 @Range<0, 1024>.

Like permissions, assumptions will have a subtyping property. It’s fine to pass a List<Int32> @Sorted to a function expecting any old List<Int32>, but not the other way around: you can’t pass any old List<Int32> to a function expecting a List<Int32> @Sorted.

You will be able to imbue a value with assumptions safely using a static or dynamic check:

assumption Sorted<T> (List<T>):
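  // A list is sorted when each element is <= its successor.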
  dup tail \<= zip_with and

Or unsafely using a coercion such as imbue (@Sorted). Various standard functions will produce values with standard assumptions; for example, sort may have the type List<T> -> List<T> @Sorted.
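
A rough Haskell analogue of assumptions (my sketch; all names hypothetical) is the smart-constructor pattern, where running the check is the only way to produce the tagged type:

import Data.List (sort)

-- The constructor would be hidden behind a module boundary in real code.
newtype SortedList a = SortedList [a]

-- The checked imbue: run the predicate, tag the value on success.
asSorted :: Ord a => [a] -> Maybe (SortedList a)
asSorted xs
  | and (zipWith (<=) xs (drop 1 xs)) = Just (SortedList xs)
  | otherwise                         = Nothing

-- Functions like sort can produce the assumption directly, with no check.
sortSorted :: Ord a => [a] -> SortedList a
sortSorted = SortedList . sort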

Thinking Forth

As always, contributors are more than welcome—a programming language is a massive undertaking, and I am only one person. Even if you have no experience with compilers, concatenative programming, or Haskell (in which Kitten’s compiler is written), there are plenty of tasks available for a motivated contributor of any skill level. Feel free to look at the current issues, email me directly, or write to the Kitteneers mailing list and I’ll gladly help you help me!

15 February 2017

Lately in Kitten

At the turn of the new year, I resolved to work on Kitten every week of 2017, and so far it’s been going pretty well. With the success of This Week in Rust, I thought I’d make a point to continue documenting my thought processes when designing and implementing language features, rather than leave them in the dark in my notebooks. I hope it’ll be enlightening to fellow language designers, and enjoyable for anyone with an interest in programming languages.

In the News

Someone posted about the language on Hacker News, and it led to some interesting discussions and multiple offers from potential contributors. Matz even tweeted about it.

It was yet another reminder that I need to update the website, since I haven’t done so for about 2 years, and many details of the language have changed significantly in that time. I swear I’ll get around to it eventually. :P

Typechecker Fixes

“Instance checking” refers to verifying an inferred type against a type signature, by checking that the type is at least as polymorphic as the signature. We do this by skolemising the type (replacing all its type variables with fresh unique type constants) and then running a subsumption check. This is largely the same as unification, but it accounts for the fact that a function type is contravariant in its input type: if $mathjax((a \to b) \le (c \to d))$, then $mathjax(b \le d)$ (covariant), but $mathjax(c \le a)$ (contravariant). This is also known as the “generic instance” relation in System F, and is usually spelled $mathjax(\sqsubseteq)$.
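
As a sketch of the variance bookkeeping (a toy checker of mine, not the compiler's actual code):

data Type = Con String | Fun Type Type
  deriving (Eq)

-- subsume inferred declared: is 'inferred' usable where 'declared' is
-- expected? Function inputs flip the direction of the check.
subsume :: Type -> Type -> Bool
subsume (Fun a b) (Fun c d) = subsume c a && subsume b d
subsume t u                 = t == u  -- toy base case; the real check
                                      -- unifies against skolem constants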

Somewhere along the line, instance checking got broken, which meant that the inferred type of a function didn't have to match its declared type: a total violation of type safety. Of course, a "total violation" is par for the course with typechecker bugs; unsoundness can sneak in at any time. Fixing this required some simple tests and logging to track down the issue, which turned out to be a trivial case of two swapped variables: the declared and inferred types. D'oh!

Syntactic Additions and Fixes

Kitten uses \name as sugar for { name } when pushing a function to the stack. James Sully noticed that this notation can be generalised to accept any term. This resulted in a nice little notational improvement when passing a partially applied operator to a higher-order function:

// Before
[1, 2, 3] { (* 5) } map

// After
[1, 2, 3] \(* 5) map

The new as (Type, Type, …) operator lets you write the identity function specialised to a certain type, which is useful for documenting tricky code, and avoiding ambiguity in type inference:

// Documenting types
1 2.0 "three" as (Int32, Float64, List<Char>)

// Error: what type do we read the string as before showing it?
"1" read show

// Unambiguous
"1" read as (Int32) show

// Potential error: not clear what type we cast to
1.0 cast

// Unambiguous
1.0 cast as (Float32)

Finally, I’ve enabled support for Unicode versions of many common Kitten symbols; for example, you can write → instead of ->. In the future, we might put this behind a feature flag, or at least warn about inconsistent mixing of Unicode symbols and their ASCII approximations.

Simplifying Internals

The compiler is essentially a database server. It takes requests in the form of program fragments from source files or interactive input, then tries to tokenise, parse, and typecheck them. Then it reports any errors, or commits the result of successful compilation to the “dictionary”, a key-value store of all the definitions in the program. Generating an executable is just one way to serialise this dictionary—it will also be used to generate documentation, TAGS files, syntax highlighting information, source maps, and so on.

However, the design of the dictionary grew organically as I needed different features, so it’s essentially just a hash table that the compiler reads and updates directly. I’ve been working on replacing it with a simpler design with only two API endpoints: query and update. In the future, this should make it easier to run the compiler as a language server, to ease editor integration.
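
In Haskell terms, the interface I have in mind is roughly the following (types simplified; Name and Entry stand in for the real ones):

import qualified Data.Map as Map

type Name = String   -- really a qualified name
data Entry = Entry   -- really a definition: type, body, metadata, and so on

newtype Dictionary = Dictionary (Map.Map Name Entry)

query :: Name -> Dictionary -> Maybe Entry
query name (Dictionary defs) = Map.lookup name defs

update :: Name -> Entry -> Dictionary -> Dictionary
update name entry (Dictionary defs) = Dictionary (Map.insert name entry defs)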

I’ve also been trying to simplify the internal representation of program terms (e.g. #168), to make things easier to compile. Kitten’s compiler is somewhat unusual in that the same term representation is used throughout the compilation process—from parsing and desugaring, through typechecking, to code generation. This is feasible because Kitten is essentially just a high-level stack-based assembly language, with a healthy dose of static typing and syntactic sugar.

Some of these refactorings have exposed bugs that will need to be addressed before they can be landed. For example, when a closure captures a variable with a polymorphic type, it’s lifted into a generic definition; the compiler fails to generate an instantiation of it, leading to what amounts to a linking error later on.

I’m going to try to do more of this development in the open, via GitHub PRs, to give potential contributors more visibility into what I’m up to.

Code Generation

The old compiler generated C99 code; the new compiler is moving away from C as a target language, opting instead to generate assembly. This will give us more control over calling conventions and data representations. There is a half-baked backend for our first target architecture, x86-64, which I intend to flesh out in the coming weeks.

Working on the backend should give me reason to polish some of the lower-level bits of Kitten, such as +Unsafe code, copy constructors and destructors, and unboxed closures. The generated executables will still depend on libc for now, but it’s my long-standing goal to eventually require zero nontrivial runtime support.

Removing the Old Compiler

Kitten has been rewritten several times over the course of its 5-year history, as I’ve developed a vision of what exactly I’m trying to build. I’ve been referring to the latest rewrite as “the new compiler” for over a year now, and it’s high time to remove the “old” one. There are only about a dozen minor features and nice-to-haves present in the old compiler that haven’t yet been ported; soon I’ll have the pleasure of red-diffing over 7000 lines of code. :)

Looking Forward

It’s been a long journey already, and there’s so much left to do, but I think we’re on target for a release late this year. I’m always happy to welcome contributors—there’s room for all kinds of people, including compiler developers, testers, documentation writers, and UI designer/developers. If there’s something you’re interested in working on, I’ll gladly help you get up to speed.

03 September 2016

Arithmetic Types in Kitten

In order to motivate myself to work on my programming language, Kitten, I’ve decided to publish my notes on various aspects of the design as I figure them out. This is partly to solicit comments and drum up interest in the project, but mostly an exercise in ensuring my ideas are solid enough to explain clearly.

In these articles, “Kitten” will refer to the “ideal” language and the new work-in-progress compiler that approximates it.

Explicitness

In non-generic definitions, it should always be evident from the code what type of data you’re using. As such, the type of integer and floating-point literals can be specified with a type suffix: i8, i16, …, u8, …, f32, f64; the unadorned integer literal 1 is equivalent to 1i32, and the float literal 1.0 is equivalent to 1.0f64. I think signed 32-bit integers and 64-bit floats are reasonable defaults for common use cases.

In the future, we could relax this by allowing the suffix to be inferred, or by allowing the default suffix to be locally changed.

// Currently fails to typecheck; 3 could be deduced to mean 3i64.
(2i64 + 3)

// Currently not valid syntax; 2 and 3 could be defaulted to Int64.
assume (Int64)
(2 + 3)

Uniformity

Because overloading interacts rather poorly with type inference, it would be easiest for me to provide different operators for different types, such as + for integer addition and +. for floating-point addition, as in OCaml. I did this in the old compiler, and it wasn't great for usability. Instead, Kitten solves this with traits: for each operator there is a single generic definition, with instances provided for all the common arithmetic types.

trait + <T> (T, T -> T)

instance + (Int32, Int32 -> Int32):
  _::kitten::add_int32

instance + (Float64, Float64 -> Float64):
  _::kitten::add_float64

Traits work like template specialisations in C++: if + is inferred to have the type Int32, Int32 -> Int32 at a given call site, then the compiler will emit a call to that particular instance; no generic code makes it into the final executable. And if no such instance exists, you’ll get a straightforward compilation error.

These traits will generally need to be inlined for performance reasons. Luckily, Kitten’s metadata system allows us to specify this easily enough:

about +:
  docs: """
    Adds two values of some type, producing a result of the same type.
    """
  inline: always

The about notation is a dumping ground for all metadata: documentation, examples, tests, operator precedence information, and optimization hints. The idea is that it’s better to have all of this information in a single structured format than to use some combination of magical comments, pragmas, attributes, and otherwise special syntax.

Safety

We want arithmetic operations to be safe, meaning that failure modes such as implicit overflow and trapping should be opt-in.

Therefore, we should provide an arbitrary-precision integer type (Int) as a sane default, as well as a range of signed and unsigned fixed-precision integer types in common sizes: Int8, Int16, Int32, Int64, and their UIntN counterparts.

Kitten has a permission system for reasoning about the permitted side effects of functions. Operations on fixed-precision integers (both signed and unsigned) are checked by default, requiring the +Fail permission to allow them to raise assertion failures in case of overflow.

instance + (Int32, Int32 -> Int32 +Fail) { … }

However, if most arithmetic operations can fail, then +Fail will proliferate through all the type signatures of calling code, which is a real drag, and might encourage people to use unchecked types more liberally than they should. We can solve this in two ways:

  • Make +Fail implicit in type signatures, and introduce a notation -Fail for removing it. This speaks to a more general notion of implicit permissions, which I think would be really valuable: it would let us introduce new implicit permissions in a backward-compatible way, allowing new code to opt out. For example, if we implicitly grant +Alloc to all functions, then they’re permitted to allocate memory on the heap—lists, closures, bignums, and so on. But for performance-critical code, you may want a static guarantee that your code performs no heap allocations, so you can opt out by specifying -Alloc.
  • Introduce a wrapper type, Checked<T>, which provides arithmetic operations that don’t require +Fail, but instead return an Optional<T> as the result (sketched below). This would mean making the signatures of the arithmetic traits more general, or inlining the Optional into the representation of Checked.

Neither is unequivocally better, and they have different use cases, so I think it makes sense to provide both.
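
For the second option, the core of checked addition might look like this in Haskell (a sketch, assuming Optional maps to Maybe and taking Int32 as the representative type):

import Data.Int (Int32)

-- Checked addition: Nothing instead of an assertion failure on overflow.
addChecked :: Int32 -> Int32 -> Maybe Int32
addChecked x y
  | y > 0 && x > maxBound - y = Nothing  -- would overflow upward
  | y < 0 && x < minBound - y = Nothing  -- would overflow downward
  | otherwise                 = Just (x + y)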

Modifiers

In addition to Checked<T>, Kitten provides wrapper types for different arithmetic modes:

  • Wrapped<T> is a T where overflow is defined to wrap modulo the range of T.
  • Unchecked<T> is a T where overflow is implementation-defined. I’m tempted to make this type require the +Unsafe permission, but that’s intended for operations that can violate memory safety, which Unchecked<T> cannot.
  • Saturating<T> is a T where overflow results in clamping to the range of T; this is useful in some signal-processing applications (sketched below).
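
As a quick Haskell illustration of the saturating mode (my sketch: compute exactly, then clamp to the representable range):

import Data.Int (Int32)

addSaturating :: Int32 -> Int32 -> Int32
addSaturating x y = fromInteger (clamp (toInteger x + toInteger y))
  where
    clamp = max (toInteger (minBound :: Int32))
          . min (toInteger (maxBound :: Int32))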

Platform Independence

All the basic arithmetic types should be provided on all platforms. That may require emulation in some cases, such as 64-bit arithmetic on a 32-bit processor. Implementations can provide additional platform-specific types such as Int128 or Float80.

SIMD operations are implemented on container-like types such as Vector4<Float32>; naïve-looking code such as:

vec { (* 2) } map

should be lowered to the single instruction (doubling each lane by adding the register to itself):

addps %xmm0, %xmm0

This is subtle, and I haven’t worked out all the details yet, but SIMD operations are important enough for performance-critical code that it seems worthwhile to consider them early on.

Licensing Problems

For arbitrary-precision types, I’m in a bit of a bind. I’m not equipped to write a native Kitten bignum library with performance on par with GMP, but because GMP is licensed under the LGPL, it can’t be used as freely as I’d like: statically linking it into generated executables would impose the license’s relinking requirements on every distributed program, so we’d be stuck with dynamic linking, making distribution more difficult and negatively impacting performance.

TL;DR

  • Kitten has all the usual machine integer and float types that systems programmers love.
  • Literal types are explicit, with some sane defaults: signed Int32 and Float64.
  • Overflow is checked by default, with various type-system features for more control.
  • The standard library provides arbitrary-precision and SIMD operations.

25 January 2016

“Weird” is intellectually lazy

Saying “X is weird” is equivalent to saying “I don’t understand X, and I blame X for it.”

I often have to make this point in discussions of programming languages. I enjoy writing in Haskell, and I’ve taught it to a number of developers. People who are accustomed to imperative programming languages are quick to term Haskell “weird”. When I ask what they mean by this, it usually boils down to the fact that it’s different from what they’re accustomed to, and they don’t understand why the differences are necessary or useful.

A “weird” language feature is one that seems gratuitous because, almost by definition, you don’t yet understand why the language authors would’ve designed it that way. Once you understand the reasoning behind it, the weirdness evaporates. Of course, you might still reasonably hate the design, but it’s impossible to give a fair critique without a proper understanding.

For instance, criticising Go for its lack of generics misses the point of what Go is for—to be a simple, boring language, with minor improvements on the status quo, that admits good tooling. And Go does this admirably. It’s not going to advance the state of the art in the slightest. It’s not going to shape the future of programming. It is going to be a language that people use for writing regular old code right now.

To shamelessly misappropriate a quote from Tim Minchin: “Throughout history, every mystery ever solved turned out to be not magic.” If you find yourself thinking that something is weird or magical, that’s not an excuse to dismiss it out of hand—rather, it’s an excellent opportunity to dig in and learn something enlightening.

14 January 2016

Thoughts on using fractional types to model mutable borrowing of substructures

In type theory, a pair is denoted by a product type, with a constructor pair and two projections fst and snd:

$mathjax[ \textrm{pair}(a, b) : a \times b \\ \textrm{fst} : a \times b \to a \\ \textrm{snd} : a \times b \to b ]$

These projections are destructive: the pair is consumed, and one of the values is discarded. However, when we have a large structure, often it’s cheaper to modify parts of it in place, rather than duplicating and reconstructing it. So we’d like to come up with a system that lets us lend a field out for modification temporarily, and repossess the modified field afterward. That would let us model mutation in terms of pure functions, with the same performance as direct mutation.

Obviously our projections can’t destroy either of the fields, because we want to be able to put them back together into a real pair again. So the functions must be linear—the result has to contain both $mathjax(a)$ and $mathjax(b)$. What result can we return other than $mathjax(a \times b)$? I dunno, how about something isomorphic?

$mathjax[ \textrm{lend-fst} : a \times b \to a \times ((a \times b) / a) \\ \textrm{lend-snd} : a \times b \to b \times ((a \times b) / b) ]$

So this gives us a semantics for quotient types: a quotient $mathjax(a / b)$ denotes a value of type $mathjax(a)$ less a “hole” that we can fill in with a value of type $mathjax(b)$. We can implement this by drawing another isomorphism between $mathjax((a \times b) / a)$ and something like $mathjax(\textrm{ref}(a) \times (\textrm{hole}(\textrm{sizeof}(a)) \times b))$—the “hole” is an appropriately sized slot that we can fill in with a value of type $mathjax(a)$, and the quotient denotes a write-once mutable reference to that slot.

The inverses of the lending functions are the repossession functions, which repossess the field and reconstitute the pair by filling in the reference:

$mathjax[ \textrm{repo-fst} : a \times ((a \times b) / a) \to a \times b \\ \textrm{repo-snd} : b \times ((a \times b) / b) \to a \times b ]$

When we fill in the slot, the reference and quotient are destroyed, and we get back a complete data structure again. The references are relative, like C++ member references, so we could use this to implement, say, swapping fields between two structures.

I’m pretty sure that if you made these types non-duplicable, it’d guarantee that the lifetime of a substructure reference would be a sub-lifetime of the lifetime of the structure. So, notably, these lending functions wouldn’t need to actually do anything at runtime; they’d simply give us a type-level way to ensure that only one mutable reference to a particular field of a value could exist at a time. The same property is enforced in Rust with its borrow system, but in this system, we don’t need any concept of references or lifetimes. Rather than enforce a static approximation of stack/scope depth, as in Rust, we can enforce a static approximation of structure depth.
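
Here's a speculative Haskell rendering of the pair case, with GHC's LinearTypes extension standing in for non-duplicability (QuotFst is my name for (a × b) / a):

{-# LANGUAGE LinearTypes #-}

-- (a × b) / a: the pair minus its first field, i.e. a b with a hole owed an a.
newtype QuotFst a b = QuotFst b

lendFst :: (a, b) %1 -> (a, QuotFst a b)
lendFst (x, y) = (x, QuotFst y)

-- Filling the hole reconstitutes the pair; linearity forbids using the
-- quotient twice, so only one outstanding loan per field can exist.
repoFst :: (a, QuotFst a b) %1 -> (a, b)
repoFst (x, QuotFst y) = (x, y)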

We probably need a subtyping rule, which states that a quotient type is a subtype of its numerator, as long as the denominator is not void:

$mathjax[ a / b \le a, b \ne 0 ]$

You can’t borrow a void field from a structure anyway, because you can’t construct a void value, so the side condition should never come up in practice.

In order for this to be useful in a real language, we probably need an additional “swap” rule, which states that it doesn’t matter in which order we lend or repossess substructures:

$mathjax[ a / b / c = a / c / b ]$

And clearly $mathjax(a / 1)$ is isomorphic to $mathjax(a)$, because every structure is perfectly happy to lend out useless values, and repossessing them does nothing.

We can use negative types to denote borrowing of fields from discriminated unions. If the type $mathjax(a + b)$ denotes a discriminated union like Haskell’s Either a b (which has a value of Left x where x is of type a, or Right y where y is of type b) then we also have two projections—but they’re weird:

$mathjax[ \textrm{lend-left} : a + b \to a + ((a + b) - a) \\ \textrm{lend-right} : a + b \to b + ((a + b) - b) ]$

Here, when we try to lend out a field, it might not be present. So we get either the lent field, or an instance of a smaller sum type. This can be used to implement pattern-matching: try to lend each field in turn, reducing the space of possible choices by one field at a time. If you get all the way down to $mathjax(0)$, you have a pattern-match failure. This automatically rules out redundant patterns: you can’t try to match on a field you’ve already tried, because it’s no longer present in the type.
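
To spell out the bookkeeping on a three-way sum (my elaboration of the scheme):

$mathjax[ (a + b + c) - a \cong b + c \qquad (b + c) - b \cong c \qquad c - c \cong 0 ]$

Each failed lend removes one summand from the space of remaining patterns; reaching the empty sum $mathjax(0)$ is precisely a pattern-match failure.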

That’s about as far as I’ve gotten with this line of thinking. If anyone has ideas for how to extend it or put it into practice, feel free to comment.