OCaml Planet

November 25, 2014

GaGallium

2-or-more approximation for intuitionistic logic

I just finished the proof of a property of the simply-typed lambda-calculus that hopefully can serve as a termination argument to decide whether a type has a unique inhabitant.

http://gallium.inria.fr/~scherer/drafts/2-or-more-approximation.pdf

The 10-page note containing the proof is available, and all comments are warmly welcome -- in particular pointers to an already-existing proof of this result, as I don't know of any. The rest of this post is (a HeVeA rendering of) the introduction, motivating the problem and presenting the fact to be proved -- or disproved. Feel free to ask yourself whether it is true or false before looking at the full note. The answer is in the last sentence of the introduction (only in the note).

It is interesting that we don't have much intuition about whether the result is true or false -- I changed my mind several times about it, and people I asked about it guessed in different ways. The proof, however, is relatively simple and not particularly technically involved -- but I spent a few days simplifying the concepts and the notations.

<style type="text/css"> .li-itemize{margin:1ex 0ex;} .li-enumerate{margin:1ex 0ex;} .dd-description{margin:0ex 0ex 1ex 4ex;} .dt-description{margin:0ex;} .toc{list-style:none;} .footnotetext{margin:0ex; padding:0ex;} div.footnotetext P{margin:0px; text-indent:1em;} .thefootnotes{text-align:left;margin:0ex;} .dt-thefootnotes{margin:0em;} .dd-thefootnotes{margin:0em 0em 0em 2em;} .footnoterule{margin:1em auto 1em 0px;width:50%;} .caption{padding-left:2ex; padding-right:2ex; margin-left:auto; margin-right:auto} .title{margin:2ex auto;text-align:center} .titlemain{margin:1ex 2ex 2ex 1ex;} .titlerest{margin:0ex 2ex;} .center{text-align:center;margin-left:auto;margin-right:auto;} .flushleft{text-align:left;margin-left:0ex;margin-right:auto;} .flushright{text-align:right;margin-left:auto;margin-right:0ex;} div table{margin-left:inherit;margin-right:inherit;margin-bottom:2px;margin-top:2px} td table{margin:auto;} table{border-collapse:collapse;} td{padding:0;} .cellpadding0 tr td{padding:0;} .cellpadding1 tr td{padding:1px;} pre{text-align:left;margin-left:0ex;margin-right:auto;} blockquote{margin-left:4ex;margin-right:4ex;text-align:left;} td p{margin:0px;} .boxed{border:1px solid black} .textboxed{border:1px solid black} .vbar{border:none;width:2px;background-color:black;} .hbar{border:none;height:2px;width:100%;background-color:black;} .hfill{border:none;height:1px;width:200%;background-color:black;} .vdisplay{border-collapse:separate;border-spacing:2px;width:auto; empty-cells:show; border:2px solid red;} .vdcell{white-space:nowrap;padding:0px; border:2px solid green;} .display{border-collapse:separate;border-spacing:2px;width:auto; border:none;} .dcell{white-space:nowrap;padding:0px; border:none;} .dcenter{margin:0ex auto;} .vdcenter{border:solid #FF8000 2px; margin:0ex auto;} .minipage{text-align:left; margin-left:0em; margin-right:auto;} .marginpar{border:solid thin black; width:20%; text-align:left;} .marginparleft{float:left; margin-left:0ex; margin-right:1ex;} .marginparright{float:right; margin-left:1ex; margin-right:0ex;} .theorem{text-align:left;margin:1ex auto 1ex 0ex;} .part{margin:2ex auto;text-align:center} .mprow{border-collapse:separate;border-spacing:2ex;width:100%;margin:0ex;} .vmprow{border-collapse:separate;border-spacing:2ex;width:100%;margin:0ex;empty-cells:show; border:solid fuchsia 2px} .mprcell{padding:0px;width:auto;} .vmprcell{padding:0px;width:auto;border:dotted fuchsia 2px;} .lstlisting{font-family:monospace;white-space:pre;margin-right:auto;margin-left:0pt;text-align:left} </style>

The correspondence between natural-deduction proofs of propositional intuitionistic logic, usually written as (logic) derivations for judgments of the form Γ ⊢ A, and well-typed terms in the simply-typed lambda-calculus, with (typing) derivations for the judgment Γ ⊢ t : A, is not one-to-one. In typing judgments Γ ⊢ t : A, the context Γ is a mapping from free variables to their type. In logic derivations, the context Γ is a set of hypotheses; there is no notion of variable, and at most one hypothesis of each type in the set. This means, for example, that the following logic derivation

  
 
    A ⊢ A
─────────────
  A ⊢ A → A
─────────────
∅ ⊢ A → A → A

corresponds to two distinct programs, namely λ(x) λ(y) x and λ(x) λ(y) y. We say that those programs have the same shape, in the sense that the erasure of their typing derivation gives the same logic derivation -- and they are the only programs of this shape.
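
In OCaml syntax (a convenience rendering only: the atomic formula A is written as a type variable, and the value names are mine), the two programs are:

let pick_first  : 'a -> 'a -> 'a = fun x y -> x   (* λ(x) λ(y) x *)
let pick_second : 'a -> 'a -> 'a = fun x y -> y   (* λ(x) λ(y) y *)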

Despite, or because of, not being one-to-one, this correspondence is very helpful to answer questions about type systems. For example, the question of whether, in a given typing environment Γ, the type A is inhabited can be answered by looking instead for a valid logic derivation of ⌊Γ⌋ ⊢ A, where ⌊Γ⌋ denotes the erasure of the mapping Γ into a set of hypotheses. If we can independently prove that only a finite number of different types need to be considered to find a valid proof (this is the case for propositional logic because of the subformula property), then there are finitely many sets of hypotheses Δ, and the search space of sequents Δ ⊢ B to consider during proof search is finite. This property is key to the termination of most search algorithms for the simply-typed lambda-calculus. Note that it would not work if we searched for typing derivations Γ ⊢ t : A directly: even if there are finitely many types of interest, the set of mappings from variables to such types is infinite.

In our current brand of work, we are interested in a different problem. Instead of knowing whether there exists a term t such that Γ ⊢ t : A, we want to know whether this term is unique -- modulo a given notion of program equivalence. Intuitively, this can be formulated as a search problem where the search does not stop at the first candidate, but tries to find whether a second one (that is non-equivalent as a program) exists. In this setting, the technique of searching for logic derivations ⌊Γ⌋ ⊢ A instead is not enough, because a unique logic derivation may correspond to several distinct programs of this shape: summarizing typing environments as sets of hypotheses loses information about (non-)unicity.

To better preserve this information, one could keep track of the number of times an hypothesis has been added to the context, representing contexts as multisets of hypotheses; given a logic derivation annotated with such counts in the context, we can precisely compute the number of programs of this shape. However, even for a finite number of types/formulas, the space of such multisets is infinite; this breaks termination arguments. A natural idea is then to approximate multisets by labeling hypotheses with 0 (not available in the context), 1 (added exactly once), or 2 (available two times or more); this two-or-more approximation has three possible states, and there are thus finitely many contexts annotated in this way.
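
As a concrete rendering of this counting domain, here is a minimal OCaml sketch (the type and function names are mine, not the note's):

(* The two-or-more approximation: exact multiplicities collapse into three classes. *)
type count = Zero | One | TwoOrMore

(* Adding one more copy of a hypothesis, computed in the approximated domain. *)
let add_one = function
  | Zero -> One
  | One | TwoOrMore -> TwoOrMore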

The question we answer in this note is the following: is the two-or-more approximation correct? By correct, we mean that if the precise number of times a given hypothesis is available varies, but remains in the same approximation class, then the total number of programs of this shape may vary, but will itself remain in the same annotation class. A possible counter-example would be a logic derivation Δ ⊢ B such that, if a given hypothesis A ∈ Δ is present exactly twice in the context (or has two free variables of this type), there is one possible program of this shape, but having three copies of this hypothesis would lead to several distinct programs.

Is this approximation correct? We found it surprisingly difficult to have an intuition on this question (guessing what the answer should be), and discussions with colleagues indicate that there is no obvious guess – people have contradictory intuitions on this.

by Gabriel Scherer at November 25, 2014 08:00 AM

November 22, 2014

Andrej Bauer

A HoTT PhD position in Ljubljana

I am looking for a PhD student in mathematics. Full tuition & stipend will be provided for a period of three years, which is also the official length of the programme. The topic of research is somewhat flexible and varies from constructive models of homotopy type theory to development of a programming language for a proof assistant based on dependent type theory; see the short summary of the Effmath project for a more detailed description.

The candidate should have as many of the following desiderata as possible, and at the very least a master’s degree (or an equivalent one):

  1. a master’s degree in mathematics, with good knowledge of computer science
  2. a master’s degree in computer science, with good knowledge of mathematics
  3. experience with functional programming
  4. experience with proof assistants
  5. familiarity with homotopy type theory

The student will officially enrol at the University of Ljubljana in October 2015. No knowledge of Slovene is required. However, it is possible, and even desirable, to start the actual work (and stipend) earlier, as early as the spring of 2015. Candidates should contact me by email as soon as possible. Please include a short CV and a statement of interest.

by Andrej Bauer at November 22, 2014 12:16 PM

November 20, 2014

Yan Shvartzshnaider

Menhir



This blog post is about Menhir.


According to Wikipedia:

A menhir (French, from Middle Breton: men, "stone" and hir, "long"[1]), standing stone, orthostat, or lith is a large upright standing stone.
Coincidentally, Menhir is also the name of an LR(1) parser generator for OCaml.
 
I followed the recommendation in Real World OCaml to use it rather than ocamlyacc:
"Menhir is an alternative parser generator that is generally superior to the venerable ocamlyacc, which dates back quite a few years. Menhir is mostly compatible with ocamlyacc grammars, and so you can usually just switch to Menhir and expect older code to work (with some minor differences described in the Menhir manual)."

I found a few sources online that help you understand Menhir but it took me some time to get my head around it.

This blog (and this post in particular) is mainly a way for me to record my activities and to understand things better. Nevertheless, I hope that going through and discussing the code I've written will shorten the learning curve for some of you - or at the very least entertain you :)

Let's go! For my purposes, I started by parsing a simple n-tuple string for the MoanaML code:
(?x,hasColor,?y,context)
Following the instructions in Real World OCaml, I knew that I had to create two files, namely Parser.mly and Lexer.mll.

The parser file is where the grammar is defined: you declare tokens and describe the sequences in which they are expected to appear.

For example, for the Moana tuple, I defined the following tokens:
%token <string> STRING
%token <string> VAR
%token LEFT_BRACE
%token RIGHT_BRACE
%token START
%token END
%token COMMA
%token EOF
I then used them to define the required parsing sequence:
parse_tuple:
  | LEFT_BRACE; s = elem; COMMA; p = elem; COMMA; o = elem; COMMA; c = elem; RIGHT_BRACE
    { (* the semantic action building the Config.tuple value from s, p, o and c goes here *) }

elem:
  | v = VAR    { Variable v }
  | c = STRING { Constant c }

The elem rule is there to differentiate constants from variables and, consequently, to pass the parsed value to the relevant type constructor.
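
For reference, the Variable and Constant constructors used in the actions above come from MoanaML; here is a hedged sketch of what such a type could look like (the real definition in MoanaML's Config module may differ):

type element =
  | Variable of string   (* e.g. ?x, ?y *)
  | Constant of string   (* e.g. hasColor, context *)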

You also declare the parser's entry point and its return type:
%start <Config.tuple> parse_tuple
Now that we have the parser, we can move on to the lexer file. Here we define rules using regular expressions in order to match, capture and convert strings into the previously defined tokens.

In my case:

rule lex = parse
  | [' ' '\t' '\n']     { lex lexbuf }                    (* skip whitespace, including '\n' *)
  | newline             { next_line lexbuf; lex lexbuf }  (* newline as defined earlier in the .mll *)
  | ","                 { COMMA }
  | "("                 { LEFT_BRACE }
  | "{"                 { START }
  | eof                 { EOF }
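
The snippet above is only partial: as shown, it never produces the RIGHT_BRACE, END, VAR or STRING tokens. As a rough sketch, the remaining rules could look something like the following; the regular expressions are my guesses based on the (?x,hasColor,?y,context) example, and the real MoanaML lexer may well differ.

  | ")"                 { RIGHT_BRACE }
  | "}"                 { END }
  | '?' (['a'-'z' 'A'-'Z' '0'-'9' '_']+ as v)   { VAR v }      (* guessed: ?-prefixed names are variables *)
  | ['a'-'z' 'A'-'Z' '0'-'9' '_']+ as s         { STRING s }   (* guessed: bare names are constants *)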

To do the actual parsing, we use:
Parser.parse_tuple Lexer.lex (Lexing.from_string s)
Real World OCaml helps us understand what's happening:
The [Lexing.from_string] function is used to construct a lexbuf [from a string], which is passed, together with the lexing function [Lexer.lex], to the [Parser.parse_tuple] function.
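
To make the wiring a bit more concrete, here is a small hedged sketch of a wrapper around the generated modules; parse_tuple_string is a made-up helper name, and Parser.Error is the exception that Menhir-generated parsers raise on a syntax error.

let parse_tuple_string (s : string) =
  let lexbuf = Lexing.from_string s in
  try Some (Parser.parse_tuple Lexer.lex lexbuf)
  with
    | Parser.Error -> None   (* syntax error from the generated parser *)
    | Failure _ -> None      (* lexing failure, e.g. an unexpected character *)

let () =
  match parse_tuple_string "(?x,hasColor,?y,context)" with
    | Some _ -> print_endline "parsed"
    | None -> print_endline "parse error"
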
That's all, folks!

Pitfalls.

I was expecting Menhir to provide me with nice error messages to help debug my code, as promised in Real World OCaml:
The biggest advantage of Menhir is that its error messages are generally more human-comprehensible, ...
but it didn't. At least, I couldn't find a way to invoke them.
In any case, the book does provide you with some code to make the debugging a bit easier.

Anyway, an online regular-expression editor is your friend - I used http://www.regexr.com. Use it to test your regular expressions and adjust the lexer accordingly.

One more thing, the order of your lexer rules matters!

Finally, in addition to the Real World OCaml book chapter, I also found this example and this Mini OCaml tutorial very useful.

That's about it, very simple and elegant once you get your head around it.

Disclaimer: As I mentioned in the beginning, I am a newbie, so please let me know if I got some of the stuff wrong or there is a more efficient way of doing things. I am more than happy to hear from you guys!

by Yan (noreply@blogger.com) at November 20, 2014 04:51 PM

November 18, 2014

Daniel Bünzli

Cmdliner 0.9.6

Release of Cmdliner 0.9.6, consult the release notes for details.

November 18, 2014 02:23 PM

Jane Street

How to choose a teaching language

If you were teaching a programming course, what language would you teach it in?

I like this question because it has any number of good answers, each quite different from the other, and each emblematic of a different approach to what programming is about.

The first formal programming class I took was COS 217 at Princeton, taught by the excellent (and at the time, I thought, terrifying) Anne Rogers. The course was (and is) taught in C, and the intellectual approach of the class was to start from the machine. Instead of just learning to program in C, we learned about how the machines we were programming to worked. That was where I first encountered instruction counters, stack frames, registers and the memory hierarchy. It was exhilarating.

Where C encourages you to start with the machine, Scheme wants you to start at the mathematical foundations of computation. You don't need to know what the lambda calculus is to appreciate Scheme's slim core, and the way in which you can build a rich and vibrant computational world on top of it. That core is expressive enough that it makes it natural to introduce ideas that come up in multiple different languages, including functional and imperative techniques, object orientation, and logic programming.

The classic course in this vein is MIT's 6.001, also known as SICP, Structure and Interpretation of Computer Programs. Sadly, 6.001 has been retired at MIT, but the book lives on, and is a worthwhile read even if you took your last CS course years ago.

MIT replaced SICP with a course based on Python, and this reflects a broader trend. As was highlighted by an informal study by Philip Guo, lots of schools now teach Python, particularly for early introductory courses. I have mixed feelings about this choice. Python is a wonderfully friendly language, but that friendliness is bundled together with some problems.

This was made apparent to me in part by my experience with students who chose to code in their interviews in Python. In many ways, Python is the ideal interview language, since its concise and readable syntax makes the constraints of coding on the whiteboard more bearable. But what I saw was that students who learn Python often walk away with a rather rough model of the semantics of the language. You might be surprised at what fraction of students who have programmed extensively in Python can't guess how Python lists might be implemented, to say nothing of their ability to explain the semantics of language features like generators or decorators.

This isn't really a knock on Python. After all, there's something great about a tool that lets you get things done without fully understanding how it works. But in different ways, both Scheme and C encourage you to understand what's going on from the ground up, and there's a certain pedagogical power in that. All in, I think Python is a great choice for an early introductory course, particularly one meant for those who aren't going to end up as computer scientists or full-time programmers. But I'd be leery of using it outside of those circumstances.

A development that I'm personally rather encouraged by is the emergence of statically typed functional languages, ML in particular, as teaching tools. Over the last few years, I've had the pleasure of visiting and lecturing at a number of schools that teach using OCaml or SML, including Brown, Cornell, Penn, Princeton, CMU and Harvard.

ML has gained ground for good reasons. First, it shares much of Scheme's elegant intellectual foundations, even though its core isn't quite as charmingly minimalistic as Scheme's. But ML has a wider range than Scheme because it allows you to show students the role of types in programming. Despite that greater range, OCaml and SML are relatively simple languages, which matters even more for teaching than it does for everyday use.

The only choice I've seen a lot of that I can't reconcile myself to is Java. Java is of course a widely used industrial language, but that doesn't mean it's a good teaching language. In my view, a key requirement for a teaching language is simplicity, and all the other choices I mentioned are simple in one way or another: C is a minimal layer on top of the machine; Scheme and ML are based on simple mathematical models of computation; and Python is a language that people find simple to use.

Java isn't simple in any of these ways. It's not particularly easy to get started with, as indicated by all the things you need to tell students to ignore rather than understand. (Yeah, public static void main, I'm looking at you!) Nor does it have a simple and transparent execution model like C. And an elegant computational core calculus like the one at the heart of Scheme and ML is nowhere in sight. The only real advantage I see for Java is vocational, and that doesn't seem to me like argument enough.

The thing to consider when you're picking a language to teach is that you're not just picking a bit of infrastructure for your students to program with during the course. You're picking an intellectual frame through which they will see all the lessons you teach them. And you should pick that frame with care.

by Yaron Minsky at November 18, 2014 12:05 AM

November 15, 2014

Mads Hartman

ocamldebug

When I first read about the time-travel feature of the ocaml debugger I was very intrigued but never got around to trying it out in practice at work. This weekend I decided to give it a go.

I wanted to try it out on a simple but non-trivial program (in terms of setup, i.e. a program that requires opam packages to compile) so I used the hello world example of the ocaml-cohttp library.

1 Configuration

Before ocamldebug can work with your programs, they need to be compiled with the -g flag enabled. This is also required for any libraries that you use. You can configure the ocamlc parameters that opam uses by setting OCAMLPARAM; in this case you want to set it to _,g. You will want to run opam reinstall on any packages you have previously installed to make sure they get recompiled.

The second thing you need to do is tell ocamldebug where to look for your compiled files. Unfortunately this is not as easy as it is with utop. Luckily I came across this stack overflow thread which explains how to do it. I ended up using a slightly different approach as you can't pass arguments to ocamldebug when running it from within Emacs (at least I couldn't get it to work). I decided to write the results of ocamlfind query -recursive core async cohttp cohttp.async into a file named .ocamldebug and prefix each line with directory. After starting the debugger in Emacs you then need to run source <PATH> to configure it.

2 Using the debugger

Starting the debugger from within Emacs is just a matter of running M-x ocamldebug and pointing it to the executable you want to debug.

You start the program by writing run in the ocamldebug buffer. You set breakpoints in your code using C-x C-a C-b and step through the code using C-c C-s. You can inspect values using C-x C-a C-p. For more information read this chapter of the OCaml Users Guide.

3 Shortcomings

Unfortunately I found a couple of shortcomings when I played around with the debugger for a bit.

3.1 No support for -pack'ed compilation units

Once you start setting breakpoints and stepping through the code, you're very likely to get an error like the following.

(ocd) step
Time: 1084742 - pc: 4416712 - module Cohttp.Request
No source file for Cohttp.Request.

This error had me puzzled for a bit as I thought I had already told ocamldebug where to look for my opam packages. I joined the #ocaml IRC channel and whitequark was able to come up with a potential explanation. It seems that ocamldebug doesn't support compilation units that contain submodules that have been added using -pack and -for-pack. Search for -pack in this section of the OCaml Users Guide for more information.

3.2 No arbitrary OCaml code execution

Once you hit a breakpoint it's possible to inspect the values of the variables in the current scope. This is great, but it would have been even better if you could execute arbitrary OCaml code in the current scope. This doesn't seem to be possible. To be fair this might be better classified as an awesome feature rather than a shortcoming.

November 15, 2014 09:00 PM

November 13, 2014

Xinuo Chen

The Magic of Thunk - Lazy

lazy

The dark side of thunk

As discussed previously, a thunk is used to encapsulate a computation for later use. Although we may not have evaluated a thunk yet, we know its value is determined. However, it has a weakness: when we invoke a thunk repeatedly, the same computation inside it gets carried out again and again, despite the fact that the results returned are always identical. For example,

let fibonacci n =  
  let rec fib a b i =
    if i = n then a
    else fib b (a+b) (i+1)
  in 
  fib 0 1 0;;

let millionth_fib = fun() -> fibonacci 1000000;;

let show_the_fib fib = function  
  | true -> Some (fib())
  | false -> None;;

let print_the_fib fib = function  
  | true -> print_int (fib())
  | false -> ();;

show_the_fib and print_the_fib both use thunks, just in case the argument is always false (so that the heavy computation would never be carried out unnecessarily). That is good. But what about show_the_fib millionth_fib true and print_the_fib millionth_fib true? Then millionth_fib will actually be evaluated twice. What if we pass billionth_fib? What if show_the_fib millionth_fib true and/or print_the_fib millionth_fib true get called lots of times? Then the heavy computation will be done again and again. You may say that in those worst cases we should just run millionth_fib() once anyway and bind the result to a variable, i.e., not use a thunk. Yes, we can do that; however, it is not perfect, as it still wastes CPU when the argument is always false. A thunk puts one gate there to make sure that the computation is not touched unless necessary; however, once it becomes necessary, the number of executions cannot be controlled.

lazy is there to make all those bad cases go away. It is essentially a thunk (delaying the computation: the first gate) with the added guarantee that the computation is executed at most once (controlling the number of executions: the second gate).
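
Before reinventing it, here is a quick sketch of the behaviour we want, using OCaml's built-in lazy together with the fibonacci function defined above; the message is printed only on the first force.

let millionth_fib_builtin =  
  lazy (print_endline "calculate millionth fib"; fibonacci 1000000);;

ignore (Lazy.force millionth_fib_builtin);;  (* prints the message, then computes *)
ignore (Lazy.force millionth_fib_builtin);;  (* cached: nothing is printed *)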

How to invent lazy?

lazy is actually a built-in keyword of OCaml for creating a lazy_t, and there is a module Lazy for you to fully utilise it. However, in this section, we will pretend lazy never exists and try to invent it. In this way, our understanding of lazy will hopefully become deeper. Let's get started.

Assume we have let millionth_fib = fun() -> fibonacci 1000000;; as before. Now we want to build millionth_fib_lazy. How can we add the second gate to stop the repeated computations, i.e., once it has been computed, how do we return the existing result directly? Let's rephrase it more precisely:

  1. millionth_fib_lazy needs to be a thunk when it has never been evaluated.
  2. millionth_fib_lazy needs to become an int immediately after the first evaluation.
  3. Future attempts to access it will get the int back directly.

The key part is from step 1 to step 2: the transition between two different types. So we have two fundamental problems to solve now:

  • millionth_fib_lazy of course must belong to a type, and we can call it lazy_node_t. Ideally, lazy_node_t must include two other types: a thunk and an int (int is specific to this case; we can use 'a for general purposes). We do not name the type lazy_t yet because of the other problem below.

  • One type (the thunk) must be able to transition to the other (the int), but not the other way around.

lazy_node_t - Multiple types in one type

Variants can do this job: a variant combines multiple types into one while still standing as a single type to OCaml's type system. If you already know variant types, it is straightforward to write the type lazy_node_t:

type 'a lazy_node_t =  
  | Delayed of (unit -> 'a)
  | Value of 'a;;

So a lazy value can initially be Delayed of a thunk. Then, after the first evaluation, it becomes a Value of 'a. In the case of millionth_fib_lazy, it is initially Delayed (fun() -> fibonacci 1000000), then later on, at some point, it becomes Value 1884755131. Are we done? No! Although millionth_fib_lazy can now be two things, we still need to handle the transition.
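
To see why, here is a small sketch (eval_once is a made-up name): matching on the variant only builds a new lazy_node_t value, while the binding we matched on stays exactly as it was.

(* "Evaluating" a Delayed node this way produces a *new* value;
   the original binding is unchanged. *)
let eval_once (node : 'a lazy_node_t) : 'a lazy_node_t =  
  match node with
    | Delayed f -> Value (f ())
    | Value _ as v -> v;;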

lazy_t - ref

Remember one fact: in the OCaml world, being immutable is the default. So even if we somehow manage to evaluate the Delayed thunk inside millionth_fib_lazy and return the result as a Value, that result will be given as a new binding and will not change millionth_fib_lazy. This is not what we want; instead, we want to change the binding/variable millionth_fib_lazy itself, so that wherever it is used, it has already been changed. Thus we have to use ref, and here comes our final lazy_t:

type 'a lazy_node_t =  
  | Delayed of (unit -> 'a)
  | Value of 'a
  | Exn of exn;; (* we add exception type here just in case the thunk might produce exception *)

type 'a lazy_t = 'a lazy_node_t ref;;

let lazy_from_fun f = ref (Delayed f);; (* create a lazy_t from a thunk *)

Now we are ready to make the transitions more concrete.

let force x_lazy =  
  match !x_lazy with
    | Value x -> x
    | Exn e -> raise e
    | Delayed f -> 
      try
        let r = f() in
        x_lazy := Value r;
        r
      with e -> 
        x_lazy := Exn e;
        raise e;;

force is used to evaluate lazy_t. The logic is simple:

  1. If the lazy is Value already, just return the result
  2. If the lazy is Exn, then raise it
  3. Otherwise, we change the lazy to either Value or Exn accordingly first, then return the result or raise the exception

Now the two fundamental problems mentioned before are solved. Let's try it:

let millionth_fib_lazy = lazy_from_fun (fun() -> print_endline "calculate millionth fib";fibonacci 1000000);;

let show_the_fib fib = function  
  | true -> Some (force fib)
  | false -> None;;

let print_the_fib fib = function  
  | true -> print_int (force fib)
  | false -> ();;

show_the_fib millionth_fib_lazy true;;  
print_the_fib millionth_fib_lazy true;;  
show_the_fib millionth_fib_lazy true;;  
show_the_fib millionth_fib_lazy true;;

calculate millionth fib  
int option = Some 1884755131  
1884755131  
int option = Some 1884755131  
int option = Some 1884755131

We can see that calculate millionth fib is printed only once. It seems to be working! So are we done? No, still no!

Thread safe

OCaml code is thread safe as long as you keep everything purely functional, obviously. However, whenever you have to use the imperative style, thread safety must be kept in mind.

Take the lazy above: it uses ref, the typical imperative type, and that is thread-dangerous. Let's say we have a lazy whose thunk takes 1 hour to finish, and two threads are using it. When the first thread tries to force it, it sees Delayed f and then slowly carries on with the computation. Some time later, when the second thread is activated and tries to force it, it also sees Delayed f, right? So eventually, two evaluations of the same thunk of the same lazy have been done. The purpose of lazy collapses, doesn't it?

Quiz 1: what is the simplest way to provide thread safety?

A trivial answer: a lock.

Mutex provides the essential thread safety in OCaml. Let's quickly try it.

First of all, each lazy needs a mutex dedicated to itself, and with this, all accesses to the lazy are guarded.

type 'a lazy_t = 'a lazy_node_t ref * Mutex.t;;

let lazy_from_fun f = ref (Delayed f), Mutex.create();; (* create a lazy_t from a thunk with a brand new mutex *)

Then when we force it, we need to lock / unlock.

let force x_lazy =  
  let x_lazy_part, m = x_lazy in
  Mutex.lock m;
  match !x_lazy_part with
    | Value x -> Mutex.unlock m; x
    | Exn e -> Mutex.unlock m; raise e
    | Delayed f -> 
      try
        let r = f() in
        x_lazy_part := Value r;
        Mutex.unlock m;
        r
      with e -> 
        x_lazy_part := Exn e;
        Mutex.unlock m;
        raise e;;
(* A bit ugly as Mutex.unlock m is everywhere.*)
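
Since those repeated Mutex.unlock m calls are indeed a bit ugly, here is one possible cleanup: a hedged sketch using a small with_lock helper (the helper name is mine) that always releases the mutex, even when the body raises.

let with_lock m f =  
  Mutex.lock m;
  try
    let v = f () in
    Mutex.unlock m;
    v
  with e ->
    Mutex.unlock m;
    raise e;;

let force x_lazy =  
  let x_lazy_part, m = x_lazy in
  with_lock m (fun () ->
    match !x_lazy_part with
      | Value x -> x
      | Exn e -> raise e
      | Delayed f ->
        (try
           let r = f() in
           x_lazy_part := Value r;
           r
         with e ->
           x_lazy_part := Exn e;
           raise e));;

All the unlocking now lives in one place.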

That's it. Our lazy is now thread safe.

Let's ask again:

Are we done?

Yes, we are done. Now beer time!

by Jackson Tale at November 13, 2014 11:00 PM