OCaml Planet

October 23, 2014

OCamlCore Forge News

Recent reboots explained: the forge has been compromised

Earlier today, I was connected while the Forge was under heavy load. This has often been the case before most of the recent reboots. This time I was able to identify the process causing the problem and stop it early.

Unfortunately, an intruder was able to exploit Shellshock through gitweb.cgi. This means that the attacker was able to run a process on the server as www-data for a few hours. I have studied the script used: it is an IRC server written in Perl. I think the main goal of the server was to attack other computers. I am not sure that any files were compromised. The script has been removed and my security tool (rkhunter) cannot find any other problems. I have upgraded the system to squeeze-lts to fix the Shellshock CVE.

The following script can test the files that have been uploaded to the forge against what is currently on the server (see the link). As far as I know, none of my tarballs have been changed. Please check your files as well and contact me if you find any problems.

Use this command to run the script:

  $> ocaml download-test.ml */dist/*.tar.gz

https://forge.ocamlcore.org/frs/download.php/1478/download-test.ml

Sorry for the inconvenience,
Sylvain Le Gall

by Sylvain Le Gall at October 23, 2014 10:53 PM

OCaml Platform

OPAM 1.2.0 Released

We are very proud to announce the availability of OPAM 1.2.0.

Upgrade from 1.1

Simply follow the usual instructions, using your preferred method (package from your distribution, binary, source, etc.) as documented on the homepage.

NOTE: There are small changes to the internal repository format (~/.opam). It will be transparently updated on first run, but in case you want to roll back or have anything precious there, you are advised to back it up.


A lot of work has been put into providing a cleaner interface, with helpful behaviour and messages in case of errors.

The documentation pages have also been largely rewritten for consistency and clarity.

New features

This is just the top of the list:

  • An extended and versatile opam pin command; see the Simplified packaging workflow
  • More expressive queries, see for example opam source
  • New metadata fields, including source repositories, bug-trackers, and finer control of package behaviour
  • An opam lint command to check the quality of packages

For more detail, see the announcement for the beta, the full changelog, and the bug-tracker.

Package format

The package format has been extended to the benefit of both packagers and users. The repository already accepts packages in the 1.2 format, and this won't affect 1.1 users as a rewrite is done on the server for compatibility with 1.1.

If you are hosting a repository, you may be interested in these administration scripts to quickly take advantage of the new features or retain compatibility.

by Louis Gesbert at October 23, 2014 12:00 AM

October 22, 2014

Sylvain Le Gall

Release of OASIS 0.4.5

On behalf of Jacques-Pascal Deplaix

I am happy to announce the release of OASIS v0.4.5.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Here is a quick summary of the important changes:

  • Build and install annotation files.
  • Use builtin bin_annot and annot tags.
  • Tag .mly files on the same basis as .ml and .mli files (required by menhir).
  • Remove the 'program' constraint from C-dependencies. Previously, when a library had C sources and e.g. an executable depended on that library, changing the C sources and running '-build' did not trigger a rebuild of the library. Adding these dependencies (rather than removing the constraint) seems to fix this.
  • Some bug fixes


  • no_automatic_syntax (alpha): disable the automatic inclusion of -syntax camlp4o for packages that match the internal heuristic (a dependency ending in .syntax, or a well-known syntax extension).
  • compiled_setup_ml (alpha): Fix a bug using multiple arguments to the configure script.

This new version is a small release to catch up with all the fixes/pull requests present in the VCS that had not yet been published. This should make the life of my dear contributors easier -- thanks again for being patient.

I would like to thank again the contributors to this release: Christopher Zimmermann, Jerome Vouillon, Tomohiro Matsuyama and Christoph Höger. Their help is greatly appreciated.

by gildor at October 22, 2014 10:42 PM

October 19, 2014

Shayne Fletcher

Stack overflow

Stack overflow refers to a condition in the execution of a computer program whereby the stack pointer exceeds the address space allocated for the stack. The usual result of "blowing the stack" is swift and brutal abnormal termination of the program.

The amount of memory allocated by the operating system for a given program's stack is finite and generally little can be done by the programmer to influence the amount that will be made available. The best the programmer can really do is to use what's given wisely.

We can get a sense of the limits of the stack in practical terms with a program like the following.

let rec range s e =
  if s >= e then []
  else s :: range (s + 1) e

let rec loop i =
  let n = 2.0 ** (i |> float_of_int) |> int_of_float in
  try
    let _ = range 0 n in
    loop (i + 1)
  with Stack_overflow ->
    Printf.printf "Stack overflow at i = %d, n = %d\n" i n

let () = loop 0

range is a function that produces the half-open range $\left[s, e\right)$ - the ordered sequence $\left\{s, s + 1, s + 2, \dots, e - 2, e - 1\right\}$. Note that range is defined in terms of itself, that is, it is a recursive function. The idea is to use it in an unbounded loop to build sequences of increasing lengths of powers of $2$: ${2^0, 2^1, 2^2, \dots}$. We set it off and, when we encounter stack overflow, terminate the program gracefully, reporting the power of $2$ found to give rise to the condition. In my case I found that I was able to make $\approx 2^{19} = 524,288$ recursive calls to range before the stack limit was breached. That's very limiting. For realistic programs, one would hope to be able to produce sequences of lengths much greater than that!

What can be done? The answer lies in the definition of range and that thing called tail recursion. Specifically, range is not tail-recursive. For a function to be tail-recursive, the last thing it does must be the recursive call to itself. That's not the case for range as written: the recursive call is the second-to-last thing it does before it returns (the last thing it does is to 'cons' a value onto the list returned by the recursive call).

Being tail-recursive is helpful because tail calls can be implemented by the compiler without pushing a new "stack frame" onto the stack. Instead, the current frame is reused, modified as necessary to set up the tail call, and the recursive call effectively becomes a simple jump. This is called tail-call elimination, and its effect is to allow tail-recursive functions to circumvent stack overflow conditions.

Here's a new definition for range, this time implemented with tail-calls.

let range s e =
  let rec aux acc s e =
    if s >= e then acc
    else aux (s :: acc) (s + 1) e
  in List.rev (aux [] s e)

With this definition for range, I find I can build sequences up to around $\approx 2^{26} = 67,108,864$ elements long without any sign of stack overflow, which is a huge improvement! At around this point though, my sequence-building capabilities start to be limited by the amount of physical memory present on my PC, but that's a different story entirely.

by Shayne Fletcher (noreply@blogger.com) at October 19, 2014 05:39 PM

October 17, 2014

Erik de Castro Lopo

Haskell : A neat trick for GHCi

Just found a really nice little hack that makes working in the GHC interactive REPL a little easier and more convenient. First of all, I added the following line to my ~/.ghci file.


  :set -DGHC_INTERACTIVE

All that line does is define a GHC_INTERACTIVE pre-processor symbol.

Then in a file that I want to load into the REPL, I need to add this to the top of the file:

  {-# LANGUAGE CPP #-}

and then in the file I can do things like:

  #ifdef GHC_INTERACTIVE
  import Data.Aeson.Encode.Pretty
  import qualified Data.ByteString.Lazy.Char8 as LBS

  prettyPrint :: Value -> IO ()
  prettyPrint = LBS.putStrLn . encodePretty
  #endif

In this particular case, I'm working with some relatively large chunks of JSON and it's useful to be able to pretty-print them when I'm in the REPL, but I have no need for that function when I compile that module into my project.

October 17, 2014 10:16 PM

October 16, 2014


OCaml 4.02.1 released

Windows binary builds for OCaml 4.02.1 are now available. Download links for the 32-bit and 64-bit builds can be found in the download section.

You can upgrade from an existing installation of OCaml 4.02.0 with


or from source code with:

godi_perform -rebuild -newer

The OCaml 4.02.0 builds are no longer maintained. If you want to continue using the binary builds for this OCaml version (or revert back to it), you have to change the repository address in /opt/wodi(32|64)/etc/godi.conf from




October 16, 2014 10:00 PM

Jane Street

What the interns have wrought: RPC_parallel and Core_profiler

We're in the midst of intern hiring season, and so we get a lot of questions about what it's like to be an intern at Jane Street. One of the things people most want to know is what kind of projects they might work on as an intern.

That's of course hard to answer precisely, since we don't yet know what next summer's intern projects are going to be. But you can get a sense by seeing some of what interns have done in the past. To that end, I thought I'd describe a couple of intern projects that were done this past summer.

Rpc_parallel

Rpc_parallel is a new library written by Todd Lubin (who will be returning to Jane Street full-time next fall), aimed at making parallel programming in OCaml simpler and safer.

Writing programs that take advantage of multiple cores, and even of multiple computers, is of course a necessity. In OCaml, such parallelism is typically achieved by writing multiple single-threaded programs that use message passing to coordinate.

We have lots of parallel message-passing systems, but while these share a lot of infrastructure (notably, the Async_rpc library), we've never really settled on a lightweight way to construct complete parallel programs. Instead, each such program tends to have its own little process coordination framework.

A while back, we wrote a library called Async_parallel (described here). Async_parallel does a lot of things right -- in particular, it makes it easy to spin up and manage new processes and distribute work between them.

But Async_parallel has a problem. It is based on OCaml's marshal facility, which allows you to serialize arbitrary data, including functions, between processes. This sounds great, but it has a dark side: marshalling arbitrary functions turns out to be error prone. In particular, in order to send a function over the wire, you also have to send a copy of any data that that function implicitly depends on.

Unfortunately, it's hard to know what kind of data is hidden behind a function, which can cause a few different forms of trouble: it might send much more data than you expect, it might fail unexpectedly if it hits one of the forms of data that can't be marshalled, or it might lead to crazy and hard-to-predict behavior if some of the data required by the function is meant to be mutable shared state.
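As a concrete illustration of that hidden data, here is a hedged sketch using only the stdlib's Marshal module (not Async_parallel itself): two closures of the same type can have wildly different serialized sizes depending on what they capture.

```ocaml
(* Sketch: marshalling a closure also marshals everything it captures.
   Requires the [Marshal.Closures] flag; the unmarshalling side must be
   running the exact same binary for the result to be usable. *)

(* Size in bytes of a value's marshalled representation. *)
let marshalled_size v = String.length (Marshal.to_string v [Marshal.Closures])

let () =
  let big = Array.make 1_000_000 0 in
  (* Same type [unit -> int], very different payloads: [f] captures [big]. *)
  let f () = Array.length big in
  let g () = 42 in
  Printf.printf "captures big array: %d bytes; captures nothing: %d bytes\n"
    (marshalled_size f) (marshalled_size g)
```

Running this shows the closure over `big` serializing to roughly a megabyte while the other stays tiny, which is exactly the surprise described above.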

Instead, we wanted a library that was more typeful and constrained in terms of what was sent over the wire. This pushed us towards a design where, at the cost of some extra verbosity, we explicitly declare the types of data that is sent. In exchange, we get a system that is easier to understand and debug.

One of the great things about Rpc_parallel is how fast it came together. Todd got it into a sufficiently good state by the middle of the summer that he was able to use it for his other projects (interns typically have at least two major projects over the course of the summer).

Rpc_parallel also benefitted from some world-hopping collaboration. Interns spend at least a week in an office other than their home office, and Todd ended up visiting Hong Kong. While there, he ended up spending a lot of time talking and working with Chengqi Song, who had a lot of experience with Async_parallel. Out of those discussions came a complete redesign and rewrite of the library, factoring out the core primitives for coordinating across multiple processes, and making the overall library simpler and more general.

By the end of the summer, a few other people had picked it up and started using it for other projects, and last week it was released open source, so you can take a look at it yourself on GitHub.

Core_profiler

Profiling is surprisingly hard, and as such it's perhaps unsurprising that there are lots of ways of doing it. If you want to understand the cost of an individual operation, you might want a micro-benchmarking library like Haskell's Criterion, or our own Core_bench. If you're trying to understand properties of a whole application, like which lines of code it's spending its time on, or how many cache-misses are occurring, you might want to use something like Linux's perf tools, which use CPU-level counters to efficiently gather profiling statistics from your program.

Another useful technique is for the programmer to modify the source to add explicit probes that keep track of when certain program points are reached, and write out that information to a log that can be analyzed after the fact.

Daniel Richman (who will be returning for an internship next summer) worked along with Roshan James (formerly an intern himself, now full-time) on a library called Core_profiler, which aims to make the use of such probes easy and efficient. Efficiency matters quite a bit, because if logging a probe takes more time than the thing you're trying to measure, you basically can't extract reliable data. Keeping the overhead small, therefore, is a must.

Accordingly, a lot of Daniel's time was spent thinking very carefully about how to write the probes in a way that would only minimally disrupt the execution of the program. He started with a simple but less efficient library, called Stats_reporting, which took about 60ns and two words of allocation per probe, and machined it down from there.

The first step was to avoid all allocation, which meant we could no longer use bin-prot, our standard binary serialization technique, since bin-prot requires that you allocate an OCaml object representing the data to be serialized. So we moved to using an internal library called Protogen for generating zero-allocation serialization code.

That brought us down to about 30ns, and zero allocations. We then decided to try out writing our own hand-rolled binary protocol, so we could have a yet-more-closely optimized binary layout. That brought us down to about 20-25ns. The next hack was to customize the binary format yet more by packing multiple values into a single, 63-bit OCaml integer. That, plus some cleverness to make sure that writes were word-aligned, brought the cost of a simple probe down to 12ns. In addition to the design changes, Daniel also spent a lot of time carefully examining the assembly code, to make sure that there were no surprises from the code generator.
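The packing trick can be sketched in a few lines of plain OCaml. The field widths below (a 15-bit probe id and a 48-bit nanosecond delta) are invented for illustration and are not Core_profiler's actual layout; the sketch also assumes a 64-bit platform, where OCaml ints have 63 bits.

```ocaml
(* Pack a probe id and a time delta into one unboxed OCaml int, so that
   recording a probe allocates nothing. Field widths are hypothetical:
   15 bits of id + 48 bits of delta = 63 bits, the width of an OCaml int
   on a 64-bit platform. *)
let pack ~probe_id ~delta_ns =
  (probe_id lsl 48) lor (delta_ns land ((1 lsl 48) - 1))

let unpack word =
  (word lsr 48, word land ((1 lsl 48) - 1))

let () =
  let w = pack ~probe_id:5 ~delta_ns:12_345 in
  let id, dt = unpack w in
  Printf.printf "probe %d, delta %dns\n" id dt
```

Because the packed word is an immediate integer, writing it to a pre-allocated buffer involves no GC work at all, which is the point of the exercise.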

We're pretty happy with the end result. We think that Core_profiler probes are now a good bit cheaper than the dynamic probes you can insert using a system like DTrace, and are in any case the best tool we have for tracking the performance of even relatively quick code-paths. And, as a nice bonus to all this optimization, the offline processing got about twice as fast as a side effect of the runtime improvements.

There's more to be said about both of these projects, and about the many other projects that were done this summer. If you're interested in applying, you can do so here:


You don't need to know anything about finance or functional programming to apply. But if you come, you'll learn a ton about both by the time you're done.

by Yaron Minsky at October 16, 2014 12:48 PM

Andrej Bauer

TEDx “Zeroes”

I spoke at TEDx University of Ljubljana. The topic was how programming influences various aspects of life. I showed the audience how a bit of simple programming can reveal the beauty of mathematics. Taking John Baez's The Beauty of Roots as an inspiration, I drew a very large image (20000 by 17500 pixels) of all roots of all polynomials of degree at most 26 whose coefficients are $-1$ or $1$. That's 268,435,452 polynomials and 6,979,321,752 roots. It is two degrees more than Sam Derbyshire's image, so consider the race to be on! Who can give me 30 degrees?

The code

The zeroes GitHub repository contains the code.

There really is not much to show. The C program which computes the zeroes uses the GNU Scientific Library routines for root finding and is just 105 lines long. It generates a PPM image which I then processed with ImageMagick and ffmpeg. The real work was in the image processing and the composition of the movies. I wrote a helper Python program that lets me create flyovers of the big image, and I became somewhat of an expert in the use of ImageMagick.
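For readers who want to experiment without GSL, here is a hedged sketch of the core computation for a single polynomial, using the classic Durand-Kerner iteration in plain OCaml. Andrej's real code uses GSL's root finders in C; this version is illustrative only and not tuned for speed or robustness.

```ocaml
open Complex

(* Evaluate p(x) by Horner's rule; c.(i) is the coefficient of x^i. *)
let eval c x = Array.fold_right (fun ci acc -> add ci (mul x acc)) c zero

(* Durand-Kerner: refine all n root estimates simultaneously with
   x_i <- x_i - p(x_i) / (a_n * prod_{j <> i} (x_i - x_j)),
   starting from distinct points off the real axis. *)
let roots c =
  let n = Array.length c - 1 in
  let lead = c.(n) in
  let r = Array.init n (fun k ->
    pow { re = 0.4; im = 0.9 } { re = float_of_int (k + 1); im = 0. }) in
  for _ = 1 to 200 do
    for i = 0 to n - 1 do
      let den = ref lead in
      for j = 0 to n - 1 do
        if j <> i then den := mul !den (sub r.(i) r.(j))
      done;
      r.(i) <- sub r.(i) (div (eval c r.(i)) !den)
    done
  done;
  r

let () =
  (* Roots of x^3 + x^2 - x + 1, one of the +-1-coefficient polynomials. *)
  let c = Array.map (fun re -> { re; im = 0. }) [| 1.; -1.; 1.; 1. |] in
  Array.iter (fun z -> Printf.printf "%.4f %+.4fi\n" z.re z.im) (roots c)
```

Mapping this over all $2^{27}$ sign patterns and binning the roots into a pixel grid reproduces the kind of picture described above, though at 105 lines the C/GSL version is the sensible tool for the full job.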

The code also contains a Python program for generating a variation of the picture in which roots of lower degrees are represented by big circles. I did not show any of this in the TEDx talk, but it is available on GitHub.

Oh, and a piece of Mathematica code that generates the zeroes fits into a tweet.

The videos

The “Zeroes” Vimeo album contains the animations. The ones I showed in the TEDx talk are in Full HD (1920 by 1080). There is also a lower resolution animation of how zeroes move around when we change the coefficients. Here is one of the movies, but you really should watch it in Full HD to see all the details.

<iframe allowfullscreen="" frameborder="0" height="281" src="http://player.vimeo.com/video/109062970?title=0&amp;byline=0&amp;portrait=0" width="500"></iframe>

The pictures

  • The computed image zeroes.ppm.gz (125 MB) at 20000 by 17500 pixels is stored in the PPM format. The picture is dull gray, and is not meant to be viewed directly.
  • The official image zeroes26.png (287 MB) at 20000 by 17500 pixels in orange tones. Beware, it can bring an image viewing program to its knees.
  • I computed tons of closeups to generate the movies. Here are the beginnings of each animation available at Vimeo, at a measly 1920 by 1080 pixels each (click on them).
    • The whole image: zoom
    • Zoom at $i$: arc
    • Zoom at $(1 + \sqrt{3} i)/2$: tofringe
    • Zoom at $1.4 i$: fringe
    • Zoom at $3 e^{7 i \pi/24}/2$: tozero
    • Zoom at $1$: unzoom

by Andrej Bauer at October 16, 2014 07:01 AM

October 15, 2014

OCamlCore Forge News

OCaml Forge maintenance

Recently the OCaml forge has required a lot of hardware reboots. The server is probably near end of life and I need to upgrade the whole infrastructure to a recent server. Rackspace Cloud, as part of their developer support program, is kindly providing a new host for the forge. You may encounter a few more problems in the coming weeks, due to this migration. Ping me if anything is utterly wrong (sylvain ... le-gall.net).

by Sylvain Le Gall at October 15, 2014 06:58 AM