OCaml Planet

March 23, 2015

Richard Jones

Mini Cloud/Cluster v2.0

Last year I wrote and rewrote a little command line tool for managing my virtualization cluster.

Of course I could use OpenStack RDO but OpenStack is a vast box of somewhat working bits and pieces. I think for a small cluster like mine you can get the essential functionality of OpenStack a lot more simply — in 1300 lines of code as it turns out.

The first thing that small cluster management software doesn’t need is any permanent daemon running on the nodes. The reason is that we already have sshd (for secure management access) and libvirtd (to manage the guests) out of the box. That’s quite sufficient to manage all the state we care about. My Mini Cloud/Cluster software just goes out and queries each node for that information whenever it needs it (in parallel of course). Nodes that are switched off are handled by ignoring them.

The second thing is that for a small cloud we can toss features that aren’t needed at all: multi-user/multi-tenant, failover, VLANs, a nice GUI.

The old mclu (Mini Cluster) v1.0 was written in Python and used Ansible to query nodes. If you’re not familiar with Ansible, it’s basically parallel ssh on steroids. This was convenient to get the implementation working, but I ended up rewriting this essential feature of Ansible in ~ 60 lines of code.

The huge down-side of Python is that even such a small program has loads of hidden bugs, because there’s no safety at all. The rewrite (in OCaml) is 1,300 lines of code, so a fraction larger, but I have a far higher confidence that it is mostly bug free.

I also changed around the way the software works to make it more “cloud like” (and hence the name change from “Mini Cluster” to “Mini Cloud”). Guests are now created from templates using virt-builder, and are stateless “cattle” (although you can mix in “pets” and mclu will manage those perfectly well because all it’s doing is remote libvirt-over-ssh commands).

$ mclu status
ham0                     on
                           total: 8pcpus 15.2G
                            used: 8vcpus 8.0G by 2 guest(s)
                            free: 6.2G
ham1                     on
                           total: 8pcpus 15.2G
                            free: 14.2G
ham2                     on
                           total: 8pcpus 30.9G
                            free: 29.9G
ham3                     off

You can grab mclu v2.0 from the git repository.


by rich at March 23, 2015 02:26 PM

March 22, 2015

Jane Street

A lighter Core

We recently released a version of our open source libraries with a much anticipated change --- Async_kernel, the heart of the Async concurrent programming library, now depends only on Core_kernel rather than on Core.

This sounds like a dull, technical change, and it kind of is. But it's also part of a larger project to make our libraries more lightweight and portable, and so suitable for a wider array of users and applications.

We've actually been working on these issues for a while now, and this seems like a good time to review some of the changes we've made over the years, and what's still to come.

Reorganizing for portability

Core has always had dependencies on Unix, including OCaml's Unix library, as well as some other parts of the Unix environment, like the Unix timezone files. This has long been a problem for porting to Windows, but more recently, the issue has loomed for two other increasingly important platforms for OCaml: Javascript and Mirage.

To help fix this problem, in 2013 we released a library called Core_kernel, which is the portable subset of Core that avoids Unixisms as well as things like threads that don't match well with the Javascript and Mirage back-ends.

In the same vein, we refactored Async, our concurrent programming library, into a set of layers (modeled on the design of the similar Lwt library) that both clarified the design and separated out the platform-specific bits. Async_kernel is the lowest level and most portable piece, hosting the basic data structures and abstractions. Async_unix adds a Unix-specific scheduler, and Async_extra builds further OS-specific functionality on top.

Until recently, the fly in this ointment was that Async_kernel still depended on Core, rather than Core_kernel, because only Core had a time library. Making Async_kernel require only Core_kernel was a bigger project than you might imagine, in the end leading us to change Timing_wheel, a core data structure for Async and several other critical libraries at Jane Street, to use an integer representation of time instead of the float-based one from Core.

Already, some experiments are underway to take advantage of this change, including some internal efforts to get Async working under Javascript, and external efforts to get cohttp's Async back-end to depend only on Async_kernel.

I'm hoping that yet more of this kind of work will follow.

Module Aliases

One long-running annoyance with OCaml is the lack of an effective namespace mechanism. For a long time, the only choice was OCaml's packed modules, which let you take a collection of modules and merge them together into one mega-module. Some kind of namespace mechanism is essential at scale, and so we used packed modules throughout our libraries.

Unfortunately, packed modules have serious downsides, both in terms of compilation time and executable sizes. We've been talking to people about this and looking for a solution for a long time. You can check out this epic thread on the platform list if you want to see some of the ensuing conversation.

A solution to this problem finally landed in OCaml 4.02, in the form of module aliases. I'll skip the detailed explanation (you can look here if you want to learn more), but the end result was great: our compilation times immediately went down by more than a factor of 3, and it gave us a path towards dropping packed modules altogether, thus reducing executable sizes and making incremental compilation massively more efficient.
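To give a flavor of the mechanism (a minimal sketch with made-up library names, not our actual layout): instead of -pack'ing mylib_list.ml and mylib_map.ml into one mega-module, you write a small wrapper module consisting purely of aliases.

(* mylib.ml -- nothing but aliases; Mylib_list and Mylib_map are hypothetical *)
module List = Mylib_list
module Map  = Mylib_map

Users still refer to Mylib.List and Mylib.Map, but no code is copied into the wrapper the way -pack used to do, which is where the compilation-time and executable-size wins come from.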

The work on dropping packed modules has already landed internally, and will hopefully make it to the external release in a few months. The benefit to executable size is significant, with typical executables dropping in size by a factor of 2, but there is more to do. OCaml doesn't have aggressive dead code elimination, and that can lead to a lot of unnecessary stuff getting linked in. We're looking at some improvements we can make to cut down the dependency tree, but better dead code elimination at the compiler would really help.

Sharing basic types

Interoperability between Core and other OCaml libraries is generally pretty good: Core uses the same basic types (e.g., string, list, array, option) as other OCaml code, and that makes it pretty easy to mix and match libraries.

That said, there are some pain points. For example, Core uses a Result type (essentially, type ('a,'b) result = Ok of 'a | Error of 'b) quite routinely, and lots of other libraries use very similar types. Unfortunately, these libraries each have their own incompatible definitions.
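As a purely hypothetical illustration of that incompatibility (the modules below are made up, not real libraries):

(* two hypothetical libraries, each with its own near-identical result type *)
module Lib_a = struct type ('a, 'b) result = Ok of 'a | Error of 'b end
module Lib_b = struct type ('a, 'b) outcome = Good of 'a | Bad of 'b end

(* glue like this ends up at every boundary between the two *)
let outcome_of_result = function
  | Lib_a.Ok x -> Lib_b.Good x
  | Lib_a.Error e -> Lib_b.Bad e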

The solution is to break out a simple type that the different libraries can share. After some discussion with the people behind some of the other libraries in question, I made a pull request to the compiler to add a result type to the stdlib.

This is a small thing, but small things matter. I hope that by paying attention to this kind of small issue, we can help keep interoperability between Core and the rest of the OCaml ecosystem smooth.

Eliminating camlp4

One concern I've heard raised about Core and Jane Street's other libraries is their reliance on camlp4. camlp4 is a somewhat divisive piece of infrastructure: it's long been the only decent way to do metaprogramming in OCaml, and as such has been enormously valuable; but it's also a complex and somewhat unloved piece of infrastructure that lots of people want to avoid.

camlp4 also makes tooling a lot more complicated, since there's no single syntax to target. Dev tools like ocp-indent and the excellent merlin have some terrible hacks to support some of the most common camlp4 syntax extensions, but the situation is clearly untenable.

You do need camlp4 to build Core, but you don't need camlp4 to use it, and in practice, that's good enough for most use cases. But for people who want to avoid camlp4 entirely, it's still a nuisance. Moreover, while you don't need camlp4 to use Core, it is convenient. For example, a lot of Core's idioms work best when you provide s-expression serializers for your types, and the sexplib syntax extension is an awfully convenient way to generate those functions.

Our plan is to simply eliminate our dependency on camlp4 entirely over the next 6 months, by switching to using ppx and extension points, a new approach to metaprogramming in OCaml that, like module aliases, landed in 4.02. We're currently rewriting all of our syntax extensions, and building tools to automatically migrate the code that depends on camlp4. People who want to continue to use the old camlp4 extensions are welcome to continue doing so, but we're cutting our dependency on them.


Even at the end of all this, we don't expect that Core and Async will suit everyone --- that's a hard bar to cross for any software package. But we do hope that through these efforts, an ever wider set of developers will be able to take advantage of the work we've done.

by Yaron Minsky at March 22, 2015 02:40 AM

March 19, 2015

Heidi Howard

Part 3: Running your own DNS Resolver with MirageOS

This article is the third in the “Running your own DNS Resolver with MirageOS” series. In the first part, we used the ocaml-dns library to look up the hostname corresponding to an IP address using its Dns_resolver_mirage module. In the second part, we wrote a simple DNS server, which serves RRs from a zone file using the Dns_server_mirage module.

Today, in the third part, we will combine the above to write a simple DNS resolver, which relays queries to another DNS resolver. Then we will compose this with our simple DNS server from last week, to build a resolver which first looks up queries in the host file and, if unsuccessful, relays the query to another DNS resolver.

As always, the complete code for these examples is in ocaml-dns-examples.

3.1 DNS Forwarder

When writing our simple DNS server, we used a function called serve_with_zonefile in Dns_server_mirage to service incoming DNS queries. Now we are going to remove a layer of abstraction and instead use serve_with_processor:

val serve_with_processor: t -> port:int -> processor:(module PROCESSOR) -> unit Lwt.t
val serve_with_zonefile : t -> port:int -> zonefile:string -> unit Lwt.t

Now, instead of passing the function a simple string representing the filename of a zonefile, we pass a first-class module satisfying the PROCESSOR signature. We can generate such a module by writing a process and using processor_of_process:

type ip_endpoint = Ipaddr.t * int

type 'a process = src:ip_endpoint -> dst:ip_endpoint -> 'a -> Dns.Query.answer option Lwt.t

module type PROCESSOR = sig
  include Dns.Protocol.SERVER

  (** DNS responder function.
      @param src Server sockaddr
      @param dst Client sockaddr
      @param Query packet
      @return Answer packet
  *)
  val process : context process
end

type 'a processor = (module PROCESSOR with type context = 'a)

val processor_of_process : Dns.Packet.t process -> Dns.Packet.t processor

So given a Dns.Packet.t process, which is a function of type:

src:ip_endpoint -> dst:ip_endpoint -> Dns.Packet.t -> Dns.Query.answer option Lwt.t

We can now service DNS packets. If we assume that myprocess is a function of this type, we can service DNS queries with the following unikernel:

open Lwt
open V1_LWT
open Dns
open Dns_server

let port = 53

module Main (C:CONSOLE) (K:KV_RO) (S:STACKV4) = struct

  module U = S.UDPV4
  module DS = Dns_server_mirage.Make(K)(S)

  let myprocess ~src ~dst packet = ...

  let start c k s =
    let server = DS.create s k in
    let processor = ((Dns_server.processor_of_process myprocess) :> (module Dns_server.PROCESSOR)) in 
    DS.serve_with_processor server ~port ~processor
end

Now we will write an implementation of myprocess which will service DNS packets by forwarding them to another DNS resolver and then relaying the response.

Recall from part 1 that you can use the resolve function in Dns_resolver_mirage to do this. All that remains is to wrap the invocation of resolve in a function of type Dns.Packet.t process, which can be done as follows:

 
let process resolver ~src ~dst packet =
  let open Packet in
  match packet.questions with
  | [] -> (* we are not supporting QDCOUNT = 0 *)
      return None
  | [q] ->
      (* DR is the Dns_resolver_mirage module from part 1; resolver_addr and
         resolver_port identify the upstream resolver we forward to *)
      DR.resolve (module Dns.Protocol.Client) resolver
        resolver_addr resolver_port q.q_class q.q_type q.q_name
      >>= fun result ->
      return (Some (Dns.Query.answer_of_response result))
  | _ -> (* we are not supporting QDCOUNT > 1 *)
      return None

3.2 DNS server & forwarder

[this part requires PR 58 on ocaml-dns until it is merged in]

We will extend our DNS forwarder to first check a zonefile; this is achieved with just 3 extra lines:

...
DS.eventual_process_of_zonefiles server [zonefile]
>>= fun process ->
let processor = (processor_of_process (compose process (forwarder resolver)) :> (module Dns_server.PROCESSOR)) in
...

Here we are using compose to combine two processes: one called process, generated from the zonefile, and one called forwarder, built from the forwarding code in the last section.
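For intuition, here is a minimal sketch of what such a compose might look like. This is only an illustration of the idea, not the actual function from ocaml-dns (see PR 58), and it assumes the process type from section 3.1 plus the usual Lwt return and >>= in scope:

let compose first second ~src ~dst packet =
  first ~src ~dst packet >>= function
  | Some answer -> return (Some answer)  (* answered by the zonefile process *)
  | None -> second ~src ~dst packet      (* otherwise fall back to the forwarder *)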

Next time, we will extend our DNS resolver to include a cache.

 

 

by Heidi-ann at March 19, 2015 12:01 PM

GaGallium

Namespace archeology

At the very end of 2011, and then at the very beginning of 2013, I worked for some weeks on namespaces for OCaml (what they could be, why we would need them, what a good solution would look like). The resulting proposal was too complex to gather steam, so I moved on -- and never got around to making the documents publicly available. Here they are.

In case you are interested in the archeology of design proposals (or if you want to work on namespaces for OCaml in the future), you can now access

This work was done in collaboration with Didier Rémy, but also Nicolas Pouillard (for the second document), and I really enjoyed working on this and with them. The reception at the time was rather cold, as people thought the proposal too complex with respect to the actual needs. Alain Frisch proposed to "just extend the module M = N construct to do what we want", and this was later implemented by Jacques Garrigue as module aliases -- although some issues, such as linking against two different versions of the same library, are still unaddressed.

The proposal suggested a language to describe namespaces, and a natural tendency of programming language enthusiasts is to make their language as expressive as possible. It's easy to make a design more frightening than necessary by adding those couple of extra operators that make the design feel complete in respects that maybe we don't really care about.

An interesting tidbit research-wise is that I was very interested in the fact that we could reflect namespaces as OCaml modules -- which I thought at the beginning to be impossible because namespaces were "open" things and modules "closed" things. In fact, we realized that modules could work for this, but that mixins would be an even better fit. A few months afterwards (and independently) Backpack (Scott Kilpatrick, Derek Dreyer, Simon Peyton Jones and Simon Marlow) was announced as part of Scott Kilpatrick's PhD work. It was very exciting to see related ideas masterfully developed in this different context.

by Gabriel Scherer at March 19, 2015 08:00 AM

March 18, 2015

@typeocaml

Binomial Heap

As we described in the previous post, the leftist tree is a binary-tree-based functional heap. It manipulates the tree structure so that the left branches are always the longest and operations follow only the right branches. It is a clever and simple data structure that fulfills the purpose of a heap.

In this post, we present another functional heap called the Binomial Heap ([1]). Instead of being a single tree structure, it is a list of binomial trees, and it provides better performance than the leftist tree on insert.

However, the reason I would like to talk about it here is not its efficiency. The fascination of the binomial heap is that if there are N elements inside it, then it will have a determined shape, no matter how it ended up with the N elements. Personally I find this pretty cool. This is not common in tree-related data structures. For example, we cannot predict the shape of a leftist tree with N elements, and the form of a binary search tree can be arbitrary even if it is somehow balanced.

Let's have a close look at it.

Binomial Tree

Binomial tree is the essential element of binomial heap. Its special structure is why binomial heap exists. Understanding binomial tree makes it easier to understand binomial heap.

Binomial tree's definition does not involve the values associated with the nodes, but just the structure:

  1. It has a rank r and r is a natural number.
  2. Its form is a root node with a list of binomial trees, whose ranks are strictly r-1, r-2, ..., 0.
  3. A binomial tree with rank 0 has only one root, with an empty list.

Let's try producing some examples.

From point 3, we know the base case:

[figure: a binomial tree of rank 0]

Now, how about rank 1? It should be a root node with a sub binomial tree with rank 1 - 1 = 0:

[figure: a binomial tree of rank 1]

Let's continue for rank 2, which should have rank 1 and rank 0:

[figure: a binomial tree of rank 2]

Finally rank 3, can you draw it?

[figure: a binomial tree of rank 3]

$ 2^r $ nodes

If we pull up the leftmost child of the root, we can see:

[figure: a rank r tree viewed as two rank r-1 trees]

This means a binomial tree with rank r can be seen as two binomial trees with the same rank r-1. Furthermore, because of this doubling, and because a rank 0 tree has exactly one node, a binomial tree with rank r must have exactly $ 2^r $ nodes, no more, no less.

For example, rank 0 has 1 node. Rank 1 is two rank 0 trees, so rank 1 has $ 2 * 1 = 2 $ nodes. Rank 2 then has $ 2 * 2 = 4 $ nodes, and so on and so forth.

Note that $ 2^r = 1 + 2^{r-1} + 2^{r-2} + ... + 2^0 $, and we can see that a rank r tree's structure fits this equation exactly (the 1 is the root and the rest is the children list).

Two r-1 is the way to be r

The definition tells us that a rank r tree is a root plus a list of trees of rank r-1, r-2, ..., and 0. So if we have a binomial tree with an arbitrary rank, can we just insert it into another target tree to form a rank r tree?

For example, suppose we have a rank 1 tree; can we insert it into the target tree below to get a rank 3 tree?

[figure: the proposed target tree, which is not a valid binomial tree]

Unfortunately we cannot, because the target tree could not have existed in the first place: it is not a valid binomial tree.

Thus in order to have a rank r tree, we must have two r-1 trees and link them together. When linking, we need to decide which tree is the new root, depending on the context. For the purpose of building a min heap later, we assume we always let the root with the smaller key be the root of the new tree.

code

Defining a binomial tree type is easy:

(* Node of key * child_list * rank *)
type 'a binomial_t = Node of 'a * 'a binomial_t list * int  

Also we can have a function for a singleton tree with rank 0:

let singleton_tree k = Node (k, [], 0)  

Then we need a link function, which promotes two trees with the same rank to a tree one rank higher.

let link ((Node (k1, c1, r1)) as t1) ((Node (k2, c2, r2)) as t2) =  
  if r1 <> r2 then failwith "Cannot link two binomial trees with different ranks"
  else if k1 < k2 then Node (k1, t2::c1, r1+1)
  else Node (k2, t1::c2, r2+1)

One possibly interesting problem can be:

Given a list of $ 2^r $ elements, how to construct a binomial tree with rank r?

We can borrow the idea of merging from bottom to top for this problem.

[figure: building a binomial tree from a list by linking pairs, bottom up]

let link_pair l =  
  let rec aux acc = function
    | [] -> acc
    | _::[] -> failwith "the number of elements must be 2^r"
    | t1::t2::tl -> aux (link t1 t2 :: acc) tl
  in
  aux [] l

let to_binomial_tree l =  
  let singletons = List.map singleton_tree l in
  let rec aux = function
    | [] -> failwith "Empty list"
    | t::[] -> t
    | l -> aux (link_pair l)
  in
  aux singletons
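As a quick sanity check of the construction above (a sketch, assuming the definitions so far are in scope), we can count the nodes of a tree built from $ 2^2 $ elements and confirm the $ 2^r $ size property:

let rec size (Node (_, children, _)) =
  1 + List.fold_left (fun acc t -> acc + size t) 0 children

let () =
  let t = to_binomial_tree [3; 1; 4; 2] in
  let Node (_, _, r) = t in
  (* 4 = 2^2 elements produce a rank 2 tree with exactly 4 nodes *)
  assert (r = 2 && size t = 4)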

Binomial coefficient

If we split a binomial tree into levels and pay attention to the number of nodes on each level, we can see:

[figure: the levels of a rank 3 binomial tree]

So from top to bottom, the numbers of nodes on the levels are 1, 3, 3 and 1. These happen to be the coefficients of $ (x+y)^3 $.

Let's try rank 4:

[figure: the levels of a rank 4 binomial tree]

They are 1, 4, 6, 4 and 1, which are the coefficients of $ (x+y)^4 $ .

The number of nodes on level k (0 <= k <= r) is $ \binom{r}{k} $, which in turn is the kth binomial coefficient of $ (x+y)^r $. This is where the name binomial tree comes from.
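We can check this property directly. Here is a small sketch (assuming the type and link from above) that counts the nodes at each depth of a tree; for a rank r tree built with link, the counts come out as the binomial coefficients:

(* returns an association list of (depth, number of nodes at that depth) *)
let nodes_per_level t =
  let rec aux depth (Node (_, children, _)) acc =
    let count = try List.assoc depth acc with Not_found -> 0 in
    let acc = (depth, count + 1) :: List.remove_assoc depth acc in
    List.fold_left (fun acc child -> aux (depth + 1) child acc) acc children
  in
  List.sort compare (aux 0 t [])

(* e.g. for a rank 3 tree: [(0, 1); (1, 3); (2, 3); (3, 1)] *)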

Binomial Heap

A binomial heap is essentially a list of binomial trees with distinct ranks. It has two characteristics:

  1. If a binomial heap has n nodes, then its shape is determined, no matter what operations it has gone through.
  2. If a binomial heap has n nodes, then the number of trees inside is O(logn).

The reason for the above points is explained as follows.

As we already know, a binomial tree with rank r has $ 2^r $ nodes. If we move to the context of the binary representation of numbers, then a rank r tree stands for the case where only the rth bit is turned on.

[figure: tree ranks mapped to bits of a binary number]

Thus, n nodes can be expressed as a list of binomial trees with distinct ranks, because the number n is just a list of bits with various slots set to 1. For example, suppose we have 5 nodes (ignoring their values for now); mapping them to a list of binomial trees, we will have:

[figure: 5 nodes mapped to a rank 0 tree and a rank 2 tree]
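A tiny sketch makes the correspondence concrete: the ranks present in a heap of n nodes are exactly the positions of the set bits of n (this helper is just for illustration, it is not part of the heap code):

let ranks_for_size n =
  let rec aux r n acc =
    if n = 0 then List.rev acc
    else aux (r + 1) (n lsr 1) (if n land 1 = 1 then r :: acc else acc)
  in
  aux 0 n []

(* ranks_for_size 5 = [0; 2]: a rank 0 tree plus a rank 2 tree, i.e. 1 + 4 = 5 nodes *)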

This is where binomial heap comes from.

  1. Since a number n has a determined binary representation, a binomial heap also has a fixed shape as long as it has n nodes.
  2. In addition, because n has O(logn) effective bits, a binomial heap has O(logn) binomial trees.
  3. If we keep the min of each binomial tree at its root, then the overall minimum element of a binomial heap is one of those roots.

Let's now implement it.

Type and singleton

It is easy.

type 'a binomial_heap_t = 'a binomial_t list

insert

When we insert a key k, we just create a singleton binomial tree and try to insert the tree to the heap list. The rule is like this:

  1. If the heap doesn't have a rank 0 tree, then directly insert the new singleton tree (with rank 0) at the head of the list.
  2. If the heap has a rank 0 tree, then the two rank 0 trees need to be linked and promoted to a new rank 1 tree, and we have to continue trying to insert the rank 1 tree into the rest of the list, which potentially starts with an existing rank 1 tree.
  3. If there is already a rank 1 tree, then link and promote to rank 2... and so on and so forth, until the newly promoted tree finds a slot to fit in.

Here are two examples:

[figure: insert example 1]

[figure: insert example 2]

The insert operation is actually the addition of 1 and n in binary representation, in reversed order.

let insert k h =  
  let rec aux (Node (_, _, r1) as bt1) = function
    | [] -> [bt1]
    | (Node (_, _, r2) as bt2)::tl ->
      if r1 = r2 then aux (link bt1 bt2) tl
      else bt1::bt2::tl
  in
  aux (singleton_tree k) h

If the heap is full, i.e., it has a consecutive series of tree ranks starting from rank 0, we need O(logn) operations to finish the insert. However, once that is done, most of the lower rank slots are empty (as shown in the above figure), and later inserts won't need O(logn) operations any more. Thus, the time complexity of insert seems to be O(logn), but it is actually amortised O(1).

Note that the above insert description is just for demonstration purposes. As with the leftist tree, merge is the most important operation for the binomial heap, and insert is just a simpler merge.

merge

The merge is like this:

  1. Get the two heads (bt1 and bt2) of the two heaps (h1 and h2).
  2. If rank bt1 < rank bt2, then bt1 goes first, and we continue to merge the rest of h1 with h2.
  3. If rank bt1 > rank bt2, then bt2 goes first, and we continue to merge h1 with the rest of h2.
  4. If rank bt1 = rank bt2, then link bt1 bt2 and insert the resulting tree into the merge of the rest of h1 and the rest of h2 (linking again whenever a tree of equal rank is met, so that ranks stay distinct).

I will skip the diagram and directly present the code here:

let rec merge h1 h2 =  
  match h1, h2 with
  | h, [] | [], h -> h
  | (Node (_, _, r1) as bt1)::tl1, (Node (_, _, r2) as bt2)::tl2 ->
    if r1 < r2 then bt1::merge tl1 h2
    else if r1 > r2 then bt2::merge h1 tl2
    (* the linked tree may collide with another tree of the same rank,
       so insert it into the merged rest via merge *)
    else merge [link bt1 bt2] (merge tl1 tl2)

(* a better and simpler version of insert *)
let insert' k h = merge [singleton_tree k] h  

The time complexity is O(logn).

get_min

We just need to scan all roots and get the min key.

let get_min = function  
  | [] -> failwith "Empty heap"
  | Node (k1, _, _)::tl ->
    List.fold_left (fun acc (Node (k, _, _)) -> min acc k) k1 tl

To achieve O(1), we can attach a minimum property to the heap's type. It always records the min, so it can be returned immediately when requested. However, we then need to update this property during insert, merge and delete_min. As every other book does, we leave this modification to the reader as an exercise.

delete_min

delete_min appears a little bit troublesome but is actually very neat.

  1. We need to locate the binomial tree with the min.
  2. Then we need to merge the trees on its left and the trees on its right to get a new list.
  3. We are not done yet, as we still need to deal with the min binomial tree itself.
  4. We are lucky that a binomial tree's child list is itself a heap, just stored in decreasing rank order. So we just need to reverse the child list and merge it with the new list from point 2.

let key (Node (k, _, _)) = k  
let child_list (Node (_, c, _)) = c

let split_by_min h =  
  let rec aux pre (a, m, b) = function
    | [] -> List.rev a, m, b
    | x::tl ->
      if key x < key m then aux (x::pre) (pre, x, tl) tl
      else aux (x::pre) (a, m, b) tl
  in
  match h with 
    | [] -> failwith "Empty heap"
    | bt::tl -> aux [bt] ([], bt, tl) tl

let delete_min h =  
  let a, m, b = split_by_min h in
  (* the child list is stored in decreasing rank order, so reverse it before merging *)
  merge (merge a b) (List.rev (child_list m))
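As a small usage sketch (assuming the definitions above are in scope), we can sort a list by pushing everything into a heap and repeatedly draining the min:

let heap_sort l =
  let rec drain h acc =
    match h with
    | [] -> List.rev acc
    | _ -> drain (delete_min h) (get_min h :: acc)
  in
  drain (List.fold_left (fun h k -> insert' k h) [] l) []

(* heap_sort [5; 1; 4; 2; 3] = [1; 2; 3; 4; 5] *)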

Binomial Heap vs Leftist Tree

|               | get_min                                 | insert         | merge   | delete_min |
|---------------|-----------------------------------------|----------------|---------|------------|
| Leftist tree  | O(1)                                    | O(logn)        | O(logn) | O(logn)    |
| Binomial heap | O(logn), but can be improved to be O(1) | Amortised O(1) | O(logn) | O(logn)    |


[1] Binomial Heap is also introduced in Purely Functional Data Structures.

by Jackson Tale at March 18, 2015 12:07 AM

OCaml Platform

OPAM 1.2.1 Released

OPAM 1.2.1 has just been released. This patch version brings a number of fixes and improvements over 1.2.0, without breaking compatibility.

Upgrade from 1.2.0 (or earlier)

See the normal installation instructions: you should generally pick up the packages from the same origin as you did for the last version -- possibly switching from the official repository packages to the ones we provide for your distribution, in case the former are lagging behind.

What's new

No huge new features in this point release -- which means you can roll back to 1.2.0 in case of problems -- but lots going on under the hood, and quite a few visible changes nonetheless:

  • The engine that processes package builds and other commands in parallel has been rewritten. You'll notice the cool new display but it's also much more reliable and efficient. Make sure to set jobs: to a value greater than 1 in ~/.opam/config in case you updated from an older version.
  • The install/upgrade/downgrade/remove/reinstall actions are also processed in a better way: the consequences of a failed action are minimised, where previously a failure would abort the full command.
  • When using version control to pin a package to a local directory without specifying a branch, only the tracked files are used by OPAM, but their changes don't need to be checked in. This was found to be the most convenient compromise.
  • Sources used for several OPAM packages may use <name>.opam files for package pinning. URLs of the form git+ssh:// or hg+https:// are now allowed.
  • opam lint has been vastly improved.

... and much more

There is also a new manual documenting the file and repository formats.

Fixes

See the changelog for a summary or closed issues in the bug-tracker for an overview.

Experimental features

These are mostly improvements to the file formats. You are welcome to use them, but they won't be accepted into the official repository until the next release.

  • New field features: in opam files, to help with ./configure scripts and documenting the specific features enabled in a given build. See the original proposal and the section in the new manual
  • The "filter" language in opam files is now well defined, and documented in the manual. In particular, undefined variables are consistently handled, as well as conversions between string and boolean values, with new syntax for converting bools to strings.
  • New package flag "verbose" in opam files, that outputs the package's build script to stdout
  • New field libexec: in <name>.install files, to install into the package's lib dir with the execution bit set.
  • Compilers can now be defined without source nor build instructions, and the base packages defined in the packages: field are now resolved and then locked. In practice, this means that repository maintainers can move the compiler itself to a package, giving a lot more flexibility.

by Louis Gesbert at March 18, 2015 12:00 AM

March 16, 2015

OCamlCore Forge News

ocaml-mysql 1.2.0 released

ocaml-mysql provides bindings to libmysqlclient. This release removes the dependency on camlp4 and employs mysql_config to detect MySQL installation paths.

by ygrek at March 16, 2015 10:40 PM

OCaml EFL 1.13.0 released

Like the previous version, only versions 1.8 and higher of the EFL and Elementary are required to build this version of OCaml EFL.

by Alexis Bernadet at March 16, 2015 10:40 PM