This manual documents the release 5.0 of the OCaml system. It is organized as follows.
OCaml runs on several operating systems. The parts of this manual that are specific to one operating system are presented as shown below:
Unix: This is material specific to the Unix family of operating systems, including Linux and macOS.
Windows: This is material specific to Microsoft Windows (Vista, 7, 8, 10).
The OCaml system is copyright © 1996–2022 Institut National de Recherche en Informatique et en Automatique (INRIA). INRIA holds all ownership rights to the OCaml system.
The OCaml system is open source and can be freely redistributed. See the file LICENSE in the distribution for licensing information.
The OCaml documentation and user's manual is copyright © 2022 Institut National de Recherche en Informatique et en Automatique (INRIA).
The OCaml documentation and user's manual is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The complete OCaml distribution can be accessed via the ocaml.org website. This site contains a lot of additional information on OCaml.
Part I
This part of the manual is a tutorial introduction to the OCaml language. A good familiarity with programming in a conventional language (say, C or Java) is assumed, but no prior exposure to functional languages is required. The present chapter introduces the core language. Chapter 2 deals with the module system, chapter 3 with the object-oriented features, chapter 4 with labeled arguments, chapter 5 with polymorphic variants, chapter 6 with the limitations of polymorphism, and chapter 8 gives some advanced examples.
For this overview of OCaml, we use the interactive system, which is started by running ocaml from the Unix shell or Windows command prompt. This tutorial is presented as the transcript of a session with the interactive system: lines starting with # represent user input; the system responses are printed below, without a leading #.
Under the interactive system, the user types OCaml phrases terminated by ;; in response to the # prompt, and the system compiles them on the fly, executes them, and prints the outcome of evaluation. Phrases are either simple expressions, or let definitions of identifiers (either values or functions).
The OCaml system computes both the value and the type for each phrase. Even function parameters need no explicit type declaration: the system infers their types from their usage in the function. Notice also that integers and floating-point numbers are distinct types, with distinct operators: + and * operate on integers, but +. and *. operate on floats.
Recursive functions are defined with the let rec binding:
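For instance, here is a minimal sketch of such a definition (the name fib is chosen purely for illustration):

let rec fib n =
  if n < 2 then n else fib (n - 1) + fib (n - 2);;
(* fib 10 evaluates to 55 *)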
In addition to integers and floating-point numbers, OCaml offers the usual basic data types:
Predefined data structures include tuples, arrays, and lists. There are also general mechanisms for defining your own data structures, such as records and variants, which will be covered in more detail later; for now, we concentrate on lists. Lists are either given in extension as a bracketed list of semicolon-separated elements, or built from the empty list [] (pronounced "nil") by adding elements in front using the :: ("cons") operator.
As with all other OCaml data structures, lists do not need to be explicitly allocated and deallocated from memory: all memory management is entirely automatic in OCaml. Similarly, there is no explicit handling of pointers: the OCaml compiler silently introduces pointers where necessary.
As with most OCaml data structures, inspecting and destructuring lists is performed by pattern-matching. List patterns have exactly the same form as list expressions, with identifiers representing unspecified parts of the list. As an example, here is insertion sort on a list:
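A possible definition, sketched here under the assumption that elements are compared with the polymorphic <= operator:

let rec insert elt = function
  | [] -> [elt]
  | head :: tail ->
      if elt <= head then elt :: head :: tail
      else head :: insert elt tail;;

let rec sort = function
  | [] -> []
  | head :: tail -> insert head (sort tail);;

(* sort [3; 1; 2] evaluates to [1; 2; 3] *)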
The type inferred for sort, 'a list -> 'a list, means that sort can actually apply to lists of any type, and returns a list of the same type. The type 'a is a type variable, and stands for any given type. The reason why sort can apply to lists of any type is that the comparisons (=, <=, etc.) are polymorphic in OCaml: they operate between any two values of the same type. This makes sort itself polymorphic over all list types.
The sort function above does not modify its input list: it builds and returns a new list containing the same elements as the input list, in ascending order. There is actually no way in OCaml to modify a list in-place once it is built: we say that lists are immutable data structures. Most OCaml data structures are immutable, but a few (most notably arrays) are mutable, meaning that they can be modified in-place at any time.
The OCaml notation for the type of a function with multiple arguments is arg1_type -> arg2_type -> ... -> return_type. For example, the type inferred for insert, 'a -> 'a list -> 'a list, means that insert takes two arguments, an element of any type 'a and a list with elements of the same type 'a, and returns a list of the same type.
OCaml is a functional language: functions in the full mathematical sense are supported and can be passed around freely just as any other piece of data. For instance, here is a deriv function that takes any float function as argument and returns an approximation of its derivative function:
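A minimal sketch of such a function (dx is the step used for the finite-difference approximation):

let deriv f dx = fun x -> (f (x +. dx) -. f x) /. dx;;
(* val deriv : (float -> float) -> float -> float -> float *)
let sin' = deriv sin 1e-6;;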
Even function composition is definable:
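For example, the following sketch defines composition and applies it to two float functions:

let compose f g = fun x -> f (g x);;
let square x = x *. x;;
let cos2 = compose square cos;;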
Functions that take other functions as arguments are called "functionals", or "higher-order functions". Functionals are especially useful to provide iterators or similar generic operations over a data structure. For instance, the standard OCaml library provides a List.map functional that applies a given function to each element of a list, and returns the list of the results:
This functional, along with a number of other list and array functionals, is predefined because it is often useful, but there is nothing magical about it: it can easily be defined as follows.
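A possible definition, written here as a sketch of the recursive scheme rather than the library's actual source:

let rec map f l =
  match l with
  | [] -> []
  | hd :: tl -> f hd :: map f tl;;

(* map (fun n -> n * 2) [1; 2; 3] evaluates to [2; 4; 6] *)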
User-defined data structures include records and variants. Both are defined with the type declaration. Here, we declare a record type to represent rational numbers.
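For instance, a record type for rationals and a function using the dot notation to access its fields (the names ratio, num and denom are illustrative):

type ratio = { num : int; denom : int };;

let add_ratio r1 r2 =
  { num = r1.num * r2.denom + r2.num * r1.denom;
    denom = r1.denom * r2.denom };;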
Record fields can also be accessed through pattern-matching:
Since there is only one case in this pattern matching, it is safe to directly expand the argument r into a record pattern:
Unneeded fields can be omitted:
Optionally, missing fields can be made explicit by ending the list of fields with a trailing wildcard _:
When both sides of the = sign are the same, it is possible to avoid repeating the field name by eliding the =field part:
This short notation for fields also works when constructing records:
Finally, it is possible to update a few fields of a record at once:
With this functional update notation, the record on the left-hand side of with is copied except for the fields on the right-hand side which are updated.
The declaration of a variant type lists all possible forms for values of that type. Each case is identified by a name, called a constructor, which serves both for constructing values of the variant type and inspecting them by pattern-matching. Constructor names are capitalized to distinguish them from variable names (which must start with a lowercase letter). For instance, here is a variant type for doing mixed arithmetic (integers and floats):
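A sketch of such a declaration:

type number = Int of int | Float of float | Error;;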
This declaration expresses that a value of type number is either an integer, a floating-point number, or the constant Error representing the result of an invalid operation (e.g. a division by zero).
Enumerated types are a special case of variant types, where all alternatives are constants:
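For example, a possible declaration of a sign type with two constant constructors:

type sign = Positive | Negative;;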
To define arithmetic operations for the number type, we use pattern-matching on the two numbers involved:
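A simplified sketch of an addition function over the number type above (a complete version would also have to deal with integer overflow):

let add_num n1 n2 =
  match (n1, n2) with
  | (Int i1, Int i2) -> Int (i1 + i2)
  | (Int i, Float f) | (Float f, Int i) -> Float (float_of_int i +. f)
  | (Float f1, Float f2) -> Float (f1 +. f2)
  | (Error, _) | (_, Error) -> Error;;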
Another interesting example of variant type is the built-in 'a option type which represents either a value of type 'a or an absence of value:
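The built-in definition is equivalent to:

type 'a option = None | Some of 'a;;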
This type is particularly useful when defining functions that can fail in common situations, for instance
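a division function that returns None instead of failing on a zero divisor (a hedged sketch, with an illustrative name):

let safe_div x y =
  if y = 0 then None else Some (x / y);;
(* val safe_div : int -> int -> int option *)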
The most common usage of variant types is to describe recursive data structures. Consider for example the type of binary trees:
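A possible declaration:

type 'a btree = Empty | Node of 'a * 'a btree * 'a btree;;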
This definition reads as follows: a binary tree containing values of type 'a (an arbitrary type) is either empty, or is a node containing one value of type 'a and two subtrees also containing values of type 'a, that is, two 'a btree.
Operations on binary trees are naturally expressed as recursive functions following the same structure as the type definition itself. For instance, here are functions performing lookup and insertion in ordered binary trees (elements increase from left to right):
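The following sketch assumes the btree type above and relies on the polymorphic comparison operators:

let rec member x btree =
  match btree with
  | Empty -> false
  | Node (y, left, right) ->
      if x = y then true
      else if x < y then member x left
      else member x right;;

let rec insert x btree =
  match btree with
  | Empty -> Node (x, Empty, Empty)
  | Node (y, left, right) ->
      if x <= y then Node (y, insert x left, right)
      else Node (y, left, insert x right);;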
(This subsection can be skipped on the first reading.)
Astute readers may have wondered what happens when two or more record fields or constructors share the same name.
The answer is that when confronted with multiple options, OCaml tries to use locally available information to disambiguate between the various fields and constructors. First, if the type of the record or variant is known, OCaml can pick unambiguously the corresponding field or constructor. For instance:
In the first example, (r:first_record) is an explicit annotation telling OCaml that the type of r is first_record. With this annotation, OCaml knows that r.x refers to the x field of the first record type. Similarly, the type annotation in the second example makes it clear to OCaml that the constructors A, B and C come from the first variant type. By contrast, in the last example, OCaml has inferred by itself that the type of r can only be first_record, and there is no need for explicit type annotations.
Those explicit type annotations can in fact be used anywhere. Most of the time they are unnecessary, but they are useful to guide disambiguation, to debug unexpected type errors, or combined with some of the more advanced features of OCaml described in later chapters.
Secondly, for records, OCaml can also deduce the right record type by looking at the whole set of fields used in an expression or pattern:
Since the fields x and y can only appear simultaneously in the first record type, OCaml infers that the type of project_and_rotate is first_record -> first_record.
As a last resort, if there is not enough information to disambiguate between different fields or constructors, OCaml picks the last defined type amongst all locally valid choices:
Here, OCaml has inferred that the possible choices for the type of {x;z} are first_record and middle_record, since the type last_record has no field z. OCaml then picks the type middle_record as the last defined type between the two possibilities.
Beware that this last-resort disambiguation is local: once OCaml has chosen a disambiguation, it sticks to this choice, even if it later leads to a type error:
Moreover, being the last defined type is a quite unstable position that may change surreptitiously after adding or moving around a type definition, or after opening a module (see chapter 2). Consequently, adding explicit type annotations to guide disambiguation is more robust than relying on the last defined type disambiguation.
Though all examples so far were written in purely applicative style, OCaml is also equipped with full imperative features. This includes the usual while and for loops, as well as mutable data structures such as arrays. Arrays are either created by listing semicolon-separated element values between [| and |] brackets, or allocated and initialized with the Array.make function, then filled up later by assignments. For instance, the function below sums two vectors (represented as float arrays) componentwise.
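A sketch of such a function:

let add_vect v1 v2 =
  let len = min (Array.length v1) (Array.length v2) in
  let res = Array.make len 0.0 in
  for i = 0 to len - 1 do
    res.(i) <- v1.(i) +. v2.(i)
  done;
  res;;
(* add_vect [| 1.0; 2.0 |] [| 3.0; 4.0 |] evaluates to [| 4.0; 6.0 |] *)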
Record fields can also be modified by assignment, provided they are declared mutable in the definition of the record type:
OCaml has no built-in notion of variable, that is, identifiers whose current value can be changed by assignment. (The let binding is not an assignment, it introduces a new identifier with a new scope.) However, the standard library provides references, which are mutable indirection cells, with operators ! to fetch the current contents of the reference and := to assign the contents. Variables can then be emulated by let-binding a reference. For instance, here is an in-place insertion sort over arrays:
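A possible version, sketched with a reference cell j emulating a mutable index variable:

let insertion_sort a =
  for i = 1 to Array.length a - 1 do
    let val_i = a.(i) in
    let j = ref i in
    while !j > 0 && val_i < a.(!j - 1) do
      a.(!j) <- a.(!j - 1);
      j := !j - 1
    done;
    a.(!j) <- val_i
  done;;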
References are also useful to write functions that maintain a current state between two calls to the function. For instance, the following pseudo-random number generator keeps the last returned number in a reference:
Again, there is nothing magical about references: they are implemented as a single-field mutable record, as follows.
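The sketch below shadows the built-in definitions to show that nothing more is needed:

type 'a ref = { mutable contents : 'a };;

let ref x = { contents = x };;
let ( ! ) r = r.contents;;
let ( := ) r x = r.contents <- x;;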
In some special cases, you may need to store a polymorphic function in a data structure, keeping its polymorphism. Doing this requires user-provided type annotations, since polymorphism is only introduced automatically for global definitions. However, you can explicitly give polymorphic types to record fields.
OCaml provides exceptions for signalling and handling exceptional conditions. Exceptions can also be used as a general-purpose non-local control structure, although this should not be overused since it can make the code harder to understand. Exceptions are declared with the exception construct, and signalled with the raise operator. For instance, the function below for taking the head of a list uses an exception to signal the case where an empty list is given.
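A sketch of such a function (the exception name Empty_list is chosen for illustration):

exception Empty_list;;

let head l =
  match l with
  | [] -> raise Empty_list
  | hd :: _ -> hd;;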
Exceptions are used throughout the standard library to signal cases where the library functions cannot complete normally. For instance, the List.assoc function, which returns the data associated with a given key in a list of (key, data) pairs, raises the predefined exception Not_found when the key does not appear in the list:
Exceptions can be trapped with the try…with construct:
The with part does pattern matching on the exception value with the same syntax and behavior as match. Thus, several exceptions can be caught by one try…with construct:
Also, finalization can be performed by trapping all exceptions, performing the finalization, then re-raising the exception:
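A generic sketch of this idiom (the protect function and its parameter names are hypothetical):

let protect f finally =
  try
    let result = f () in
    finally ();
    result
  with exn ->
    finally ();
    raise exn;;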
An alternative to try…with is to catch the exception while pattern matching:
Note that this construction is only useful if the exception is raised between match…with. Exception patterns can be combined with ordinary patterns at the toplevel,
but they cannot be nested inside other patterns. For instance, the pattern Some (exception A) is invalid.
When exceptions are used as a control structure, it can be useful to make them as local as possible by using a locally defined exception. For instance, with
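(* a sketch of a fixpoint iteration using a locally defined exception Done *)
let fixpoint f x =
  let exception Done in
  let x = ref x in
  (try
     while true do
       let y = f !x in
       if y = !x then raise Done else x := y
     done
   with Done -> ());
  !x;;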
the function f cannot raise a Done exception, which removes an entire class of misbehaving functions.
OCaml allows us to defer some computation until later when we need the result of that computation.
We use lazy (expr) to delay the evaluation of some expression expr. For example, we can defer the computation of 1+1 until we need the result of that expression, 2. Let us see how we initialize a lazy expression.
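A possible definition follows; the name lazy_two is the one used in the rest of this section:

let lazy_two = lazy (print_endline "lazy_two evaluation"; 1 + 1);;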
We added print_endline "lazy_two evaluation" to see when the lazy expression is being evaluated.
The value of lazy_two is displayed as <lazy>, which means the expression has not been evaluated yet, and its final value is unknown.
Note that lazy_two has type int lazy_t. However, the type 'a lazy_t is an internal type name, so the type 'a Lazy.t should be preferred when possible.
When we finally need the result of a lazy expression, we can call Lazy.force on that expression to force its evaluation. The function force comes from standard-library module Lazy.
Notice that our function call above prints "lazy_two evaluation" and then returns the plain value of the computation.
Now if we look at the value of lazy_two, we see that it is not displayed as <lazy> anymore but as lazy 2.
This is because Lazy.force memoizes the result of the forced expression. In other words, every subsequent call of Lazy.force on that expression returns the result of the first computation without recomputing the lazy expression. Let us force lazy_two once again.
The expression is not evaluated this time; notice that "lazy_two evaluation" is not printed. The result of the initial computation is simply returned.
Lazy patterns provide another way to force a lazy expression.
We can also use lazy patterns in pattern matching.
The lazy expression lazy_expr is forced only if the lazy_guard value yields true once computed. Indeed, a simple wildcard pattern (not lazy) never forces the lazy expression's evaluation. However, a pattern with the keyword lazy, even if it is a wildcard, always forces the evaluation of the deferred computation.
We finish this introduction with a more complete example representative of the use of OCaml for symbolic processing: formal manipulations of arithmetic expressions containing variables. The following variant type describes the expressions we shall manipulate:
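A sketch of such a type (constructor names chosen to match the operations that follow):

type expression =
  | Const of float
  | Var of string
  | Sum of expression * expression    (* e1 + e2 *)
  | Diff of expression * expression   (* e1 - e2 *)
  | Prod of expression * expression   (* e1 * e2 *)
  | Quot of expression * expression   (* e1 / e2 *);;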
We first define a function to evaluate an expression given an environment that maps variable names to their values. For simplicity, the environment is represented as an association list.
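A sketch assuming the expression type above; an unbound variable simply lets the Not_found exception from List.assoc escape:

let rec eval env exp =
  match exp with
  | Const c -> c
  | Var v -> List.assoc v env
  | Sum (f, g) -> eval env f +. eval env g
  | Diff (f, g) -> eval env f -. eval env g
  | Prod (f, g) -> eval env f *. eval env g
  | Quot (f, g) -> eval env f /. eval env g;;

(* eval [("x", 1.0); ("y", 3.14)] (Prod (Sum (Var "x", Const 2.0), Var "y")) *)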
Now for a real symbolic processing, we define the derivative of an expression with respect to a variable dv:
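A sketch of the derivation function over the same type:

let rec deriv exp dv =
  match exp with
  | Const _ -> Const 0.0
  | Var v -> if v = dv then Const 1.0 else Const 0.0
  | Sum (f, g) -> Sum (deriv f dv, deriv g dv)
  | Diff (f, g) -> Diff (deriv f dv, deriv g dv)
  | Prod (f, g) -> Sum (Prod (f, deriv g dv), Prod (deriv f dv, g))
  | Quot (f, g) ->
      Quot (Diff (Prod (deriv f dv, g), Prod (f, deriv g dv)),
            Prod (g, g));;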
As shown in the examples above, the internal representation (also called abstract syntax) of expressions quickly becomes hard to read and write as the expressions get larger. We need a printer and a parser to go back and forth between the abstract syntax and the concrete syntax, which in the case of expressions is the familiar algebraic notation (e.g. 2*x+1).
For the printing function, we take into account the usual precedence rules (i.e. * binds tighter than +) to avoid printing unnecessary parentheses. To this end, we maintain the current operator precedence and print parentheses around an operator only if its precedence is less than the current precedence.
There is a printf function in the Printf module (see chapter 2) that allows you to produce formatted output more concisely. It follows the behavior of the printf function from the C standard library. The printf function takes a format string that describes the desired output as a text interspersed with specifiers (for instance %d, %f). The specifiers are then substituted by the following arguments, in their order of appearance in the format string:
The OCaml type system checks that the type of the arguments and the specifiers are compatible. If you pass it an argument of a type that does not correspond to the format specifier, the compiler will display an error message:
The fprintf function is like printf except that it takes an output channel as the first argument. The %a specifier can be useful to define custom printers (for custom types). For instance, we can create a printing template that converts an integer argument to signed decimal:
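A sketch of such a printer (pp_int is a hypothetical name):

let pp_int ppf n = Printf.fprintf ppf "%d" n;;
(* val pp_int : out_channel -> int -> unit *)
Printf.printf "Integer: %a" pp_int 100;;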
The advantage of those printers based on the %a specifier is that they can be composed together to create more complex printers step by step. We can define a combinator that can turn a printer for the 'a type into a printer for 'a option:
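A possible sketch of such a combinator:

let pp_option printer ppf = function
  | None -> Printf.fprintf ppf "None"
  | Some v -> Printf.fprintf ppf "Some(%a)" printer v;;
(* val pp_option :
     (out_channel -> 'a -> unit) -> out_channel -> 'a option -> unit *)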
If its argument is None, the printer returned by pp_option printer prints None; otherwise it uses the provided printer to print the contents of Some.
Here is how to rewrite the pretty-printer using fprintf:
Due to the way that format strings are built, storing a format string requires an explicit type annotation:
All examples given so far were executed under the interactive system. OCaml code can also be compiled separately and executed non-interactively using the batch compilers ocamlc and ocamlopt. The source code must be put in a file with extension .ml. It consists of a sequence of phrases, which will be evaluated at runtime in their order of appearance in the source file. Unlike in interactive mode, types and values are not printed automatically; the program must call printing functions explicitly to produce some output. The ;; used in the interactive examples is not required in source files created for use with OCaml compilers, but can be helpful to mark the end of a top-level expression unambiguously even when there are syntax errors. Here is a sample standalone program to print the greatest common divisor (gcd) of two numbers:
(* File gcd.ml *)
let rec gcd a b =
  if b = 0 then a
  else gcd b (a mod b);;

let main () =
  let a = int_of_string Sys.argv.(1) in
  let b = int_of_string Sys.argv.(2) in
  Printf.printf "%d\n" (gcd a b);
  exit 0;;
main ();;
Sys.argv is an array of strings containing the command-line parameters. Sys.argv.(1) is thus the first command-line parameter. The program above is compiled and executed with the following shell commands:
$ ocamlc -o gcd gcd.ml
$ ./gcd 6 9
3
$ ./gcd 7 11
1
More complex standalone OCaml programs are typically composed of multiple source files, and can link with precompiled libraries. Chapters 13 and 16 explain how to use the batch compilers ocamlc and ocamlopt. Recompilation of multi-file OCaml projects can be automated using third-party build systems, such as dune.
This chapter introduces the module system of OCaml.
A primary motivation for modules is to package together related definitions (such as the definitions of a data type and associated operations over that type) and enforce a consistent naming scheme for these definitions. This avoids running out of names or accidentally confusing names. Such a package is called a structure and is introduced by the structâŠend construct, which contains an arbitrary sequence of definitions. The structure is usually given a name with the module binding. For instance, here is a structure packaging together a type of priority queues and their operations:
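A possible implementation, sketched as a simple heap structure; the component names (queue, empty, insert, remove_top, extract, Queue_is_empty) are the ones referred to in the rest of this chapter:

module PrioQueue =
  struct
    type priority = int
    type 'a queue = Empty | Node of priority * 'a * 'a queue * 'a queue
    let empty = Empty
    let rec insert queue prio elt =
      match queue with
      | Empty -> Node (prio, elt, Empty, Empty)
      | Node (p, e, left, right) ->
          if prio <= p
          then Node (prio, elt, insert right p e, left)
          else Node (p, e, insert right prio elt, left)
    exception Queue_is_empty
    let rec remove_top = function
      | Empty -> raise Queue_is_empty
      | Node (_, _, left, Empty) -> left
      | Node (_, _, Empty, right) -> right
      | Node (_, _, (Node (lprio, lelt, _, _) as left),
                    (Node (rprio, relt, _, _) as right)) ->
          if lprio <= rprio
          then Node (lprio, lelt, remove_top left, right)
          else Node (rprio, relt, left, remove_top right)
    let extract = function
      | Empty -> raise Queue_is_empty
      | Node (prio, elt, _, _) as queue -> (prio, elt, remove_top queue)
  end;;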
Outside the structure, its components can be referred to using the "dot notation", that is, identifiers qualified by a structure name. For instance, PrioQueue.insert is the function insert defined inside the structure PrioQueue and PrioQueue.queue is the type queue defined in PrioQueue.
Another possibility is to open the module, which brings all identifiers defined inside the module in the scope of the current structure.
Opening a module enables lighter access to its components, at the cost of making it harder to identify in which module an identifier has been defined. In particular, opened modules can shadow identifiers present in the current scope, potentially leading to confusing errors:
A partial solution to this conundrum is to open modules locally, making the components of the module available only in the concerned expression. This can also make the code both easier to read (since the open statement is closer to where it is used) and easier to refactor (since the code fragment is more self-contained). Two constructions are available for this purpose:
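the let open construct, whose scope extends to the end of the enclosing expression (shown here, as a sketch, with the PrioQueue module defined above):

let open PrioQueue in
insert empty 1 "hello";;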
and
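the parenthesized form Module.(expression), which opens the module only inside the parentheses:

PrioQueue.(insert empty 1 "hello");;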
In the second form, when the body of a local open is itself delimited by parentheses, braces or brackets, the parentheses of the local open can be omitted. For instance,
becomes
This second form also works for patterns:
It is also possible to copy the components of a module inside another module by using an include statement. This can be particularly useful to extend existing modules. As an illustration, we could add functions that return an optional value rather than an exception when the priority queue is empty.
Signatures are interfaces for structures. A signature specifies which components of a structure are accessible from the outside, and with which type. It can be used to hide some components of a structure (e.g. local function definitions) or export some components with a restricted type. For instance, the signature below specifies the three priority queue operations empty, insert and extract, but not the auxiliary function remove_top. Similarly, it makes the queue type abstract (by not providing its actual representation as a concrete type).
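A sketch of such a signature, matching the PrioQueue structure above:

module type PRIOQUEUE =
  sig
    type priority = int            (* still concrete *)
    type 'a queue                  (* now abstract *)
    val empty : 'a queue
    val insert : 'a queue -> int -> 'a -> 'a queue
    val extract : 'a queue -> int * 'a * 'a queue
    exception Queue_is_empty
  end;;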
Restricting the PrioQueue structure by this signature results in another view of the PrioQueue structure where the remove_top function is not accessible and the actual representation of priority queues is hidden:
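For instance, assuming the definitions above:

module AbstractPrioQueue = (PrioQueue : PRIOQUEUE);;
(* AbstractPrioQueue.remove_top is no longer accessible,
   and 'a AbstractPrioQueue.queue is abstract *)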
The restriction can also be performed during the definition of the structure, as in
module PrioQueue = (struct ... end : PRIOQUEUE);;
An alternate syntax is provided for the above:
module PrioQueue : PRIOQUEUE = struct ... end;;
Like for modules, it is possible to include a signature to copy its components inside the current signature. For instance, we can extend the PRIOQUEUE signature with the extract_opt function:
Functors are "functions" from modules to modules. Functors let you create parameterized modules and then provide other modules as parameter(s) to get a specific implementation. For instance, a Set module implementing sets as sorted lists could be parameterized to work with any module that provides an element type and a comparison function compare (such as OrderedString):
By applying the Set functor to a structure implementing an ordered type, we obtain set operations for this type:
As in the PrioQueue example, it would be good style to hide the actual implementation of the type set, so that users of the structure will not rely on sets being lists, and we can switch later to another, more efficient representation of sets without breaking their code. This can be achieved by restricting Set by a suitable functor signature:
In an attempt to write the type constraint above more elegantly, one may wish to name the signature of the structure returned by the functor, then use that signature in the constraint:
The problem here is that SET specifies the type element abstractly, so that the type equality between element in the result of the functor and t in its argument is forgotten. Consequently, WrongStringSet.element is not the same type as string, and the operations of WrongStringSet cannot be applied to strings. As demonstrated above, it is important that the type element in the signature SET be declared equal to Elt.t; unfortunately, this is impossible above since SET is defined in a context where Elt does not exist. To overcome this difficulty, OCaml provides a with type construct over signatures that allows enriching a signature with extra type equalities:
As in the case of simple structures, an alternate syntax is provided for defining functors and restricting their result:
module AbstractSet2(Elt: ORDERED_TYPE) : (SET with type element = Elt.t) = struct ... end;;
Abstracting a type component in a functor result is a powerful technique that provides a high degree of type safety, as we now illustrate. Consider an ordering over character strings that is different from the standard ordering implemented in the OrderedString structure. For instance, we compare strings without distinguishing upper and lower case.
Note that the two types AbstractStringSet.set and NoCaseStringSet.set are not compatible, and values of these two types do not match. This is the correct behavior: even though both set types contain elements of the same type (strings), they are built upon different orderings of that type, and different invariants need to be maintained by the operations (the lists being sorted in strictly increasing order with respect to the standard ordering in one case, and to the case-insensitive ordering in the other). Applying operations from AbstractStringSet to values of type NoCaseStringSet.set could give incorrect results, or build lists that violate the invariants of NoCaseStringSet.
All examples of modules so far have been given in the context of the interactive system. However, modules are most useful for large, batch-compiled programs. For these programs, it is a practical necessity to split the source into several files, called compilation units, that can be compiled separately, thus minimizing recompilation after changes.
In OCaml, compilation units are special cases of structures and signatures, and the relationship between the units can be explained easily in terms of the module system. A compilation unit A comprises two files:
These two files together define a structure named A as if the following definition was entered at top-level:
module A: sig (* contents of file A.mli *) end
        = struct (* contents of file A.ml *) end;;
The files that define the compilation units can be compiled separately using the ocamlc -c command (the -c option means âcompile only, do not try to linkâ); this produces compiled interface files (with extension .cmi) and compiled object code files (with extension .cmo). When all units have been compiled, their .cmo files are linked together using the ocamlc command. For instance, the following commands compile and link a program composed of two compilation units Aux and Main:
$ ocamlc -c Aux.mli                      # produces aux.cmi
$ ocamlc -c Aux.ml                       # produces aux.cmo
$ ocamlc -c Main.mli                     # produces main.cmi
$ ocamlc -c Main.ml                      # produces main.cmo
$ ocamlc -o theprogram Aux.cmo Main.cmo
The program behaves exactly as if the following phrases were entered at top-level:
module Aux: sig (* contents of Aux.mli *) end
          = struct (* contents of Aux.ml *) end;;
module Main: sig (* contents of Main.mli *) end
           = struct (* contents of Main.ml *) end;;
In particular, Main can refer to Aux: the definitions and declarations contained in Main.ml and Main.mli can refer to definitions in Aux.ml, using the Aux.ident notation, provided these definitions are exported in Aux.mli.
The order in which the .cmo files are given to ocamlc during the linking phase determines the order in which the module definitions occur. Hence, in the example above, Aux appears first and Main can refer to it, but Aux cannot refer to Main.
Note that only top-level structures can be mapped to separately-compiled files, but neither functors nor module types. However, all module-class objects can appear as components of a structure, so the solution is to put the functor or module type inside a structure, which can then be mapped to a file.
(Chapter written by Jérôme Vouillon, Didier Rémy and Jacques Garrigue)
This chapter gives an overview of the object-oriented features of OCaml.
Note that the relationship between object, class and type in OCaml is different from that in mainstream object-oriented languages such as Java and C++, so you shouldn't assume that similar keywords mean the same thing. Object-oriented features are used much less frequently in OCaml than in those languages. OCaml has alternatives that are often more appropriate, such as modules and functors. Indeed, many OCaml programs do not use objects at all.
The class point below defines one instance variable x and two methods get_x and move. The initial value of the instance variable is 0. The variable x is declared mutable, so the method move can change its value.
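A sketch of this class definition:

class point =
  object
    val mutable x = 0
    method get_x = x
    method move d = x <- x + d
  end;;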
We now create a new point p, instance of the point class.
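let p = new point;;   (* the toplevel reports val p : point = <obj> *)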
Note that the type of p is point. This is an abbreviation automatically defined by the class definition above. It stands for the object type <get_x : int; move : int -> unit>, listing the methods of class point along with their types.
We now invoke some methods of p:
The evaluation of the body of a class only takes place at object creation time. Therefore, in the following example, the instance variable x is initialized to different values for two different objects.
The class point can also be abstracted over the initial values of the x coordinate.
Like in function definitions, the definition above can be abbreviated as:
An instance of the class point is now a function that expects an initial parameter to create a point object:
The parameter x_init is, of course, visible in the whole body of the definition, including methods. For instance, the method get_offset in the class below returns the position of the object relative to its initial position.
Expressions can be evaluated and bound before defining the object body of the class. This is useful to enforce invariants. For instance, points can be automatically adjusted to the nearest point on a grid, as follows:
(One could also raise an exception if the x_init coordinate is not on the grid.) In fact, the same effect could be obtained here by calling the definition of class point with the value of the origin.
An alternate solution would have been to define the adjustment in a special allocation function:
However, the former pattern is generally more appropriate, since the code for adjustment is part of the definition of the class and will be inherited.
This ability provides class constructors as can be found in other languages. Several constructors can be defined this way to build objects of the same class but with different initialization patterns; an alternative is to use initializers, as described below in section 3.4.
There is another, more direct way to create an object: create it without going through a class.
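For instance, an immediate object analogous to the point class above:

let p =
  object
    val mutable x = 0
    method get_x = x
    method move d = x <- x + d
  end;;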
The syntax is exactly the same as for class expressions, but the result is a single object rather than a class. All the constructs described in the rest of this section also apply to immediate objects.
Unlike classes, which cannot be defined inside an expression, immediate objects can appear anywhere, using variables from their environment.
Immediate objects have two weaknesses compared to classes: their types are not abbreviated, and you cannot inherit from them. But these two weaknesses can be advantages in some situations, as we will see in sections 3.3 and 3.10.
A method or an initializer can invoke methods on self (that is, the current object). For that, self must be explicitly bound, here to the variable s (s could be any identifier, even though we will often choose the name self.)
Dynamically, the variable s is bound at the invocation of a method. In particular, when the class printable_point is inherited, the variable s will be correctly bound to the object of the subclass.
A common problem with self is that, as its type may be extended in subclasses, you cannot fix it in advance. Here is a simple example.
You can ignore the first two lines of the error message. What matters is the last one: putting self into an external reference would make it impossible to extend it through inheritance. We will see in section 3.12 a workaround to this problem. Note however that, since immediate objects are not extensible, the problem does not occur with them.
Let-bindings within class definitions are evaluated before the object is constructed. It is also possible to evaluate an expression immediately after the object has been built. Such code is written as an anonymous hidden method called an initializer. Therefore, it can access self and the instance variables.
Initializers cannot be overridden. Instead, all initializers are evaluated sequentially. Initializers are particularly useful to enforce invariants. Another example can be seen in section 8.1.
It is possible to declare a method without actually defining it, using the keyword virtual. This method will be provided later in subclasses. A class containing virtual methods must be flagged virtual, and cannot be instantiated (that is, no object of this class can be created). It still defines type abbreviations (treating virtual methods as other methods.)
Instance variables can also be declared as virtual, with the same effect as with methods.
Private methods are methods that do not appear in object interfaces. They can only be invoked from other methods of the same object.
Note that this is not the same thing as private and protected methods in Java or C++, which can be called from other objects of the same class. This is a direct consequence of the independence between types and classes in OCaml: two unrelated classes may produce objects of the same type, and there is no way at the type level to ensure that an object comes from a specific class. However a possible encoding of friend methods is given in section 3.17.
Private methods are inherited (they are by default visible in subclasses), unless they are hidden by signature matching, as described below.
Private methods can be made public in a subclass.
The annotation virtual here is only used to mention a method without providing its definition. Since we didn't add the private annotation, this makes the method public, keeping the original definition.
An alternative definition is
The constraint on self's type requires a public move method, and this is sufficient to make the method public.
One could think that a private method should remain private in a subclass. However, since the method is visible in a subclass, it is always possible to pick its code and define a method of the same name that runs that code, so yet another (heavier) solution would be:
Of course, private methods can also be virtual. Then, the keywords must appear in this order: method private virtual.
Class interfaces are inferred from class definitions. They may also be defined directly and used to restrict the type of a class. Like class declarations, they also define a new type abbreviation.
In addition to program documentation, class interfaces can be used to constrain the type of a class. Both concrete instance variables and concrete private methods can be hidden by a class type constraint. Public methods and virtual members, however, cannot.
Or, equivalently:
The interface of a class can also be specified in a module signature, and used to restrict the inferred signature of a module.
We illustrate inheritance by defining a class of colored points that inherits from the class of points. This class has all instance variables and all methods of class point, plus a new instance variable c and a new method color.
A point and a colored point have incompatible types, since a point has no method color. However, the function get_x below is a generic function applying method get_x to any object p that has this method (and possibly some others, which are represented by an ellipsis in the type). Thus, it applies to both points and colored points.
Methods need not be declared previously, as shown by the example:
Multiple inheritance is allowed. Only the last definition of a method is kept: the redefinition in a subclass of a method that was visible in the parent class overrides the definition in the parent class. Previous definitions of a method can be reused by binding the related ancestor. Below, super is bound to the ancestor printable_point. The name super is a pseudo value identifier that can only be used to invoke a super-class method, as in super#print.
A private method that has been hidden in the parent class is no longer visible, and is thus not overridden. Since initializers are treated as private methods, all initializers along the class hierarchy are evaluated, in the order they are introduced.
Note that for clarity's sake, the method print is explicitly marked as overriding another definition by annotating the method keyword with an exclamation mark !. If the method print were not overriding the print method of printable_point, the compiler would raise an error:
This explicit overriding annotation also works for val and inherit:
Reference cells can be implemented as objects. The naive definition fails to typecheck:
The reason is that at least one of the methods has a polymorphic type (here, the type of the value stored in the reference cell), thus either the class should be parametric, or the method type should be constrained to a monomorphic type. A monomorphic instance of the class could be defined by:
Note that since immediate objects do not define a class type, they have no such restriction.
On the other hand, a class for polymorphic references must explicitly list the type parameters in its declaration. Class type parameters are listed between [ and ]. The type parameters must also be bound somewhere in the class body by a type constraint.
The type parameter in the declaration may actually be constrained in the body of the class definition. In the class type, the actual value of the type parameter is displayed in the constraint clause.
Let us consider a more complex example: define a circle, whose center may be any kind of point. We put an additional type constraint in method move, since no free variables must remain unaccounted for by the class type parameters.
An alternate definition of circle, using a constraint clause in the class definition, is shown below. The type #point used below in the constraint clause is an abbreviation produced by the definition of class point. This abbreviation unifies with the type of any object belonging to a subclass of class point. It actually expands to < get_x : int; move : int -> unit; .. >. This leads to the following alternate definition of circle, which has slightly stronger constraints on its argument, as we now expect center to have a method get_x.
The class colored_circle is a specialized version of class circle that requires the type of the center to unify with #colored_point, and adds a method color. Note that when specializing a parameterized class, the instance of type parameter must always be explicitly given. It is again written between [ and ].
While parameterized classes may be polymorphic in their contents, they are not enough to allow polymorphism of method use.
A classical example is defining an iterator.
At first glance, we seem to have a polymorphic iterator; however, this does not work in practice.
Our iterator works, as shown by its first use for summation. However, since objects themselves are not polymorphic (only their constructors are), using the fold method fixes its type for this individual object. Our next attempt to use it as a string iterator fails.
The problem here is that quantification was wrongly located: it is not the class we want to be polymorphic, but the fold method. This can be achieved by giving an explicitly polymorphic type in the method definition.
As you can see in the class type shown by the compiler, while polymorphic method types must be fully explicit in class definitions (appearing immediately after the method name), quantified type variables can be left implicit in class descriptions. Why require types to be explicit? The problem is that (int -> int -> int) -> int -> int would also be a valid type for fold, and it happens to be incompatible with the polymorphic type we gave (automatic instantiation only works for toplevel type variables, not for inner quantifiers, where it becomes an undecidable problem.) So the compiler cannot choose between those two types, and must be helped.
However, the type can be completely omitted in the class definition if it is already known, through inheritance or type constraints on self. Here is an example of method overriding.
The following idiom separates description and definition.
Note here the (self : int #iterator) idiom, which ensures that this object implements the interface iterator.
Polymorphic methods are called in exactly the same way as normal methods, but you should be aware of some limitations of type inference. Namely, a polymorphic method can only be called if its type is known at the call site. Otherwise, the method will be assumed to be monomorphic, and given an incompatible type.
The workaround is easy: you should put a type constraint on the parameter.
Of course the constraint may also be an explicit method type. Only occurrences of quantified variables are required.
Another use of polymorphic methods is to allow some form of implicit subtyping in method arguments. We have already seen in section 3.8 how some functions may be polymorphic in the class of their argument. This can be extended to methods.
Note here the special syntax (#point0 as 'a) we have to use to quantify the extensible part of #point0. As for the variable binder, it can be omitted in class specifications. If you want polymorphism inside an object field, it must be quantified independently.
In method m1, o must be an object with at least a method n1, itself polymorphic. In method m2, the argument of n2 and x must have the same type, which is quantified at the same level as 'a.
Subtyping is never implicit. There are, however, two ways to perform subtyping. The most general construction is fully explicit: both the domain and the codomain of the type coercion must be given.
We have seen that points and colored points have incompatible types. For instance, they cannot be mixed in the same list. However, a colored point can be coerced to a point, hiding its color method:
An object of type t can be seen as an object of type t' only if t is a subtype of t'. For instance, a point cannot be seen as a colored point.
Indeed, narrowing coercions without runtime checks would be unsafe. Runtime type checks might raise exceptions, and they would require the presence of type information at runtime, which is not the case in the OCaml system. For these reasons, there is no such operation available in the language.
Be aware that subtyping and inheritance are not related. Inheritance is a syntactic relation between classes while subtyping is a semantic relation between types. For instance, the class of colored points could have been defined directly, without inheriting from the class of points; the type of colored points would remain unchanged and thus still be a subtype of points.
The domain of a coercion can often be omitted. For instance, one can define:
In this case, the function colored_point_to_point is an instance of the function to_point. This is not always true, however. The fully explicit coercion is more precise and is sometimes unavoidable. Consider, for example, the following class:
The object type c0 is an abbreviation for <m : 'a; n : int> as 'a. Consider now the type declaration:
The object type c1 is an abbreviation for the type <m : 'a> as 'a. The coercion from an object of type c0 to an object of type c1 is correct:
However, the domain of the coercion cannot always be omitted. In that case, the solution is to use the explicit form. Sometimes, a change in the class-type definition can also solve the problem.
While class types c1 and c2 are different, both object types c1 and c2 expand to the same object type (same method names and types). Yet, when the domain of a coercion is left implicit and its co-domain is an abbreviation of a known class type, then the class type, rather than the object type, is used to derive the coercion function. This allows leaving the domain implicit in most cases when coercing from a subclass to its superclass. The type of a coercion can always be seen as below:
Note the difference between these two coercions: in the case of to_c2, the type #c2 = < m : 'a; .. > as 'a is polymorphically recursive (according to the explicit recursion in the class type of c2); hence the success of applying this coercion to an object of class c0. On the other hand, in the first case, c1 was only expanded and unrolled twice to obtain < m : < m : c1; .. >; .. > (remember #c1 = < m : c1; .. >), without introducing recursion. You may also note that the type of to_c2 is #c2 -> c2 while the type of to_c1 is more general than #c1 -> c1. This is not always true, since there are class types for which some instances of #c are not subtypes of c, as explained in section 3.16. Yet, for parameterless classes the coercion (_ :> c) is always more general than (_ : #c :> c).
A common problem may occur when one tries to define a coercion to a class c while defining class c. The problem is due to the type abbreviation not being completely defined yet, and so its subtypes are not clearly known. Then, a coercion (_ :> c) or (_ : #c :> c) is taken to be the identity function, as in
As a consequence, if the coercion is applied to self, as in the following example, the type of self is unified with the closed type c (a closed object type is an object type without ellipsis). This would constrain the type of self to be closed and is thus rejected. Indeed, the type of self cannot be closed: this would prevent any further extension of the class. Therefore, a type error is generated when the unification of this type with another type would result in a closed object type.
However, the most common instance of this problem, coercing self to its current class, is detected as a special case by the type checker, and properly typed.
This allows the following idiom, keeping a list of all objects belonging to a class or its subclasses:
This idiom can in turn be used to retrieve an object whose type has been weakened:
The type < m : int > we see here is just the expansion of c, due to the use of a reference; we have succeeded in getting back an object of type c.
The previous coercion problem can often be avoided by first defining the abbreviation, using a class type:
It is also possible to use a virtual class. Inheriting from this class simultaneously forces all methods of c to have the same type as the methods of c'.
One could think of defining the type abbreviation directly:
However, the abbreviation #c' cannot be defined directly in a similar way. It can only be defined by a class or a class-type definition. This is because a #-abbreviation carries an implicit anonymous variable .. that cannot be explicitly named. The closest you can get to it is:
with an extra type variable capturing the open object type.
It is possible to write a version of class point without assignments on the instance variables. The override construct {< ... >} returns a copy of "self" (that is, the current object), possibly changing the value of some instance variables.
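A sketch of such a class (the move_to method is added to illustrate the elided form discussed below):

class functional_point y =
  object
    val x = y
    method get_x = x
    method move d = {< x = x + d >}
    method move_to x = {< x >}
  end;;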
As with records, the form {< x >} is an elided version of {< x = x >} which avoids the repetition of the instance variable name. Note that the type abbreviation functional_point is recursive, which can be seen in the class type of functional_point: the type of self is 'a and 'a appears inside the type of the method move.
The above definition of functional_point is not equivalent to the following:
While objects of either class will behave the same, objects of their subclasses will be different. In a subclass of bad_functional_point, the method move will keep returning an object of the parent class. On the contrary, in a subclass of functional_point, the method move will return an object of the subclass.
Functional update is often used in conjunction with binary methods as illustrated in section 8.2.1.
Objects can also be cloned, whether they are functional or imperative. The library function Oo.copy makes a shallow copy of an object. That is, it returns a new object that has the same methods and instance variables as its argument. The instance variables are copied but their contents are shared. Assigning a new value to an instance variable of the copy (using a method call) will not affect instance variables of the original, and conversely. A deeper assignment (for example if the instance variable is a reference cell) will of course affect both the original and the copy.
The type of Oo.copy is the following:
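val copy : (< .. > as 'a) -> 'a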
The keyword as in that type binds the type variable 'a to the object type < .. >. Therefore, Oo.copy takes an object with any methods (represented by the ellipsis), and returns an object of the same type. The type of Oo.copy is different from type < .. > -> < .. > as each ellipsis represents a different set of methods. The ellipsis actually behaves as a type variable.
In fact, Oo.copy p will behave as p#copy assuming that a public method copy with body {< >} has been defined in the class of p.
Objects can be compared using the generic comparison functions = and <>. Two objects are equal if and only if they are physically equal. In particular, an object and its copy are not equal.
Other generic comparisons such as (<, <=, ...) can also be used on objects. The relation < defines an unspecified but strict ordering on objects. The ordering relationship between two objects is fixed permanently once the two objects have been created, and it is not affected by mutation of fields.
Cloning and override have a non-empty intersection. They are interchangeable when used within an object and without overriding any field:
Only the override can be used to actually override fields, and only the Oo.copy primitive can be used externally.
Cloning can also be used to provide facilities for saving and restoring the state of objects.
The above definition will only backup one level. The backup facility can be added to any class by using multiple inheritance.
We can define a variant of backup that retains all copies. (We also add a method clear to manually erase all copies.)
Recursive classes can be used to define objects whose types are mutually recursive.
Although their types are mutually recursive, the classes widget and window are themselves independent.
A binary method is a method which takes an argument of the same type as self. The class comparable below is a template for classes with a binary method leq of type 'a -> bool where the type variable 'a is bound to the type of self. Therefore, #comparable expands to < leq : 'a -> bool; .. > as 'a. We see here that the binder as also allows writing recursive types.
We then define a subclass money of comparable. The class money simply wraps floats as comparable objects. We will extend money below with more operations. We have to use a type constraint on the class parameter x because the primitive <= is a polymorphic function in OCaml. The inherit clause ensures that the type of objects of this class is an instance of #comparable.
Note that the type money is not a subtype of type comparable, as the self type appears in contravariant position in the type of method leq. Indeed, an object m of class money has a method leq that expects an argument of type money since it accesses its value method. Considering m of type comparable would allow a call to method leq on m with an argument that does not have a method value, which would be an error.
Similarly, the type money2 below is not a subtype of type money.
It is however possible to define functions that manipulate objects of type either money or money2: the function min will return the minimum of any two objects whose type unifies with #comparable. The type of min is not the same as #comparable -> #comparable -> #comparable, as the abbreviation #comparable hides a type variable (an ellipsis). Each occurrence of this abbreviation generates a new variable.
This function can be applied to objects of type money or money2.
More examples of binary methods can be found in sections 8.2.1 and 8.2.3.
Note the use of override for method times. Writing new money2 (k *. repr) instead of {< repr = k *. repr >} would not behave well with inheritance: in a subclass money3 of money2 the times method would return an object of class money2 but not of class money3 as would be expected.
The class money could naturally carry another binary method. Here is a direct definition:
The above class money reveals a problem that often occurs with binary methods. In order to interact with other objects of the same class, the representation of money objects must be revealed, using a method such as value. If we remove all binary methods (here plus and leq), the representation can easily be hidden inside objects by removing the method value as well. However, this is not possible as soon as some binary method requires access to the representation of objects of the same class (other than self).
Here, the representation of the object is known only to a particular object. To make it available to other objects of the same class, we are forced to make it available to the whole world. However we can easily restrict the visibility of the representation using the module system.
Another example of friend functions may be found in section 8.2.3. These examples occur when a group of objects (here objects of the same class) and functions should see each other's internal representation, while their representation should be hidden from the outside. The solution is always to define all friends in the same module, give access to the representation and use a signature constraint to make the representation abstract outside the module.
(Chapter written by Jacques Garrigue)
If you have a look at modules ending in Labels in the standard library, you will see that function types have annotations you did not have in the functions you defined yourself.
Such annotations of the form name: are called labels. They are meant to document the code, allow more checking, and give more flexibility to function application. You can give such names to arguments in your programs, by prefixing them with a tilde ~.
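A small sketch (the function name f and its labels are illustrative):

let f ~x ~y = x - y;;
(* val f : x:int -> y:int -> int *)
let x = 3 and y = 2 in f ~x ~y;;   (* evaluates to 1 *)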
When you want to use distinct names for the variable and the label appearing in the type, you can use a naming label of the form ~name:. This also applies when the argument is not a variable.
Labels obey the same rules as other identifiers in OCaml, that is, you cannot use a reserved keyword (like in or to) as a label.
Formal parameters and arguments are matched according to their respective labels, the absence of label being interpreted as the empty label. This allows commuting arguments in applications. One can also partially apply a function on any argument, creating a new function of the remaining parameters.
If several arguments of a function bear the same label (or no label), they will not commute among themselves, and order matters. But they can still commute with other arguments.
An interesting feature of labeled arguments is that they can be made optional. For optional parameters, the question mark ? replaces the tilde ~ of non-optional ones, and the label is also prefixed by ? in the function type. Default values may be given for such optional parameters.
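For instance, a sketch of a function with an optional step argument defaulting to 1 (the name bump matches the later discussion):

let bump ?(step = 1) x = x + step;;
(* val bump : ?step:int -> int -> int *)
bump 2;;          (* 3 *)
bump ~step:3 2;;  (* 5 *)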
A function taking some optional arguments must also take at least one non-optional argument. The criterion for deciding whether an optional argument has been omitted is the non-labeled application of an argument appearing after this optional argument in the function type. Note that if that argument is labeled, you will only be able to eliminate optional arguments by totally applying the function, omitting all optional arguments and omitting all labels for all remaining arguments.
Optional parameters may also commute with non-optional or unlabeled ones, as long as they are applied simultaneously. By nature, optional arguments do not commute with unlabeled arguments applied independently.
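The remark below refers to a function test; a plausible definition, given here purely for illustration, is:

let test ?(x = 0) ?(y = 0) () ?(z = 0) () = (x, y, z);;
(* val test : ?x:int -> ?y:int -> unit -> ?z:int -> unit -> int * int * int *)
test ~y:2 ~x:3 () ();;   (* the optional arguments commute with the first () *)
(test () ()) ~z:1;;      (* rejected, as explained below *)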
Here (test () ()) is already (0,0,0) and cannot be further applied.
Optional arguments are actually implemented as option types. If you do not give a default value, you have access to their internal representation, type 'a option = None | Some of 'a. You can then provide different behaviors when an argument is present or not.
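For instance, a sketch of bump written without a default value, inspecting the option directly:

let bump ?step x =
  match step with
  | None -> x * 2
  | Some y -> x + y;;
(* val bump : ?step:int -> int -> int *)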
It may also be useful to relay an optional argument from a function call to another. This can be done by prefixing the applied argument with ?. This question mark disables the wrapping of optional argument in an option type.
While they provide an increased comfort for writing function applications, labels and optional arguments have the pitfall that they cannot be inferred as completely as the rest of the language.
You can see it in the following two examples.
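The two examples could look like this (reconstructed for illustration; the names h' and bump_it are assumptions):

let h' g = g ~y:2 ~x:3;;
(* val h' : (y:int -> x:int -> 'a) -> 'a
   h' f is rejected: f expects ~x before ~y *)

let bump_it bump x = bump ~step:2 x;;
(* val bump_it : (step:int -> 'a -> 'b) -> 'a -> 'b
   bump_it bump 1 is rejected: bump has type ?step:int -> int -> int *)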
The first case is simple: g is passed ~y and then ~x, but f expects ~x and then ~y. This is correctly handled if we know the type of g to be x:int -> y:int -> int in advance, but otherwise this causes the above type clash. The simplest workaround is to apply formal parameters in a standard order.
The second example is more subtle: while we intended the argument bump to be of type ?step:int -> int -> int, it is inferred as step:int -> int -> 'a. These two types being incompatible (internally normal and optional arguments are different), a type error occurs when applying bump_it to the real bump.
We will not try here to explain in detail how type inference works. One must just understand that there is not enough information in the above program to deduce the correct type of g or bump. That is, there is no way to know whether an argument is optional or not, or which is the correct order, by looking only at how a function is applied. The strategy used by the compiler is to assume that there are no optional arguments, and that applications are done in the right order.
The right way to solve this problem for optional parameters is to add a type annotation to the argument bump.
In practice, such problems appear mostly when using objects whose methods have optional arguments, so writing the type of object arguments is often a good idea.
Normally the compiler generates a type error if you attempt to pass to a function a parameter whose type is different from the expected one. However, in the specific case where the expected type is a non-labeled function type, and the argument is a function expecting optional parameters, the compiler will attempt to transform the argument to have it match the expected type, by passing None for all optional parameters.
This transformation is coherent with the intended semantics, including side-effects. That is, if the application of optional parameters shall produce side-effects, these are delayed until the received function is really applied to an argument.
As with names, choosing labels for functions is not an easy task. A good labeling is one that makes the program more readable, is easy to remember, and, when possible, allows useful partial applications.
We explain here the rules we applied when labeling OCaml libraries.
To speak in an "object-oriented" way, one can consider that each function has a main argument, its object, and other arguments related to its action, the parameters. To permit the combination of functions through functionals in commuting label mode, the object is not labeled: its role is clear from the function itself. The parameters are labeled with names that suggest their nature or their role. The best labels combine nature and role. When this is not possible, the role is to be preferred, since the nature will often be given by the type itself. Obscure abbreviations should be avoided.
ListLabels.map : f:('a -> 'b) -> 'a list -> 'b list
UnixLabels.write : file_descr -> buf:bytes -> pos:int -> len:int -> unit
When there are several objects of same nature and role, they are all left unlabeled.
ListLabels.iter2 : f:('a -> 'b -> unit) -> 'a list -> 'b list -> unit
When there is no preferable object, all arguments are labeled.
BytesLabels.blit : src:bytes -> src_pos:int -> dst:bytes -> dst_pos:int -> len:int -> unit
However, when there is only one argument, it is often left unlabeled.
BytesLabels.create : int -> bytes
This principle also applies to functions of several arguments whose return type is a type variable, as long as the role of each argument is not ambiguous. Labeling such functions may lead to awkward error messages when one attempts to omit labels in an application, as we have seen with ListLabels.fold_left.
Here are some of the label names you will find throughout the libraries.
Label | Meaning |
f: | a function to be applied |
pos: | a position in a string, array or byte sequence |
len: | a length |
buf: | a byte sequence or string used as buffer |
src: | the source of an operation |
dst: | the destination of an operation |
init: | the initial value for an iterator |
cmp: | a comparison function, e.g. Stdlib.compare |
mode: | an operation mode or a flag list |
All these are only suggestions, but keep in mind that the choice of labels is essential for readability. Bizarre choices will make the program harder to maintain.
Ideally, the right function name with the right labels should be enough to understand the function's meaning. Since one can get this information with OCamlBrowser or the ocaml toplevel, the documentation is only consulted when a more detailed specification is needed.
(Chapter written by Jacques Garrigue)
Variants as presented in section 1.4 are a powerful tool to build data structures and algorithms. However they sometimes lack flexibility when used in modular programming. This is due to the fact that every constructor is assigned to a unique type when defined and used. Even if the same name appears in the definition of multiple types, the constructor itself belongs to only one type. Therefore, one cannot decide that a given constructor belongs to multiple types, or consider a value of some type to belong to some other type with more constructors.
With polymorphic variants, this original assumption is removed. That is, a variant tag does not belong to any type in particular, the type system will just check that it is an admissible value according to its use. You need not define a type before using a variant tag. A variant type will be inferred independently for each of its uses.
In programs, polymorphic variants work like usual ones. You just have to prefix their names with a backquote character `.
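For instance, a small sketch:

[`On; `Off];;
(* - : [> `Off | `On ] list *)
let f = function `On -> 1 | `Off -> 0 | `Number n -> n;;
(* val f : [< `Number of int | `Off | `On ] -> int *)
List.map f [`On; `Off];;
(* - : int list = [1; 0] *)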
[>`Off|`On] list means that to match this list, you should at least be able to match `Off and `On, without argument. [<`On|`Off|`Number of int] means that f may be applied to `Off, `On (both without argument), or `Number n where n is an integer. The > and < inside the variant types show that they may still be refined, either by defining more tags or by allowing less. As such, they contain an implicit type variable. Because each of the variant types appears only once in the whole type, their implicit type variables are not shown.
The above variant types were polymorphic, allowing further refinement. When writing type annotations, one will most often describe fixed variant types, that is types that cannot be refined. This is also the case for type abbreviations. Such types do not contain < or >, but just an enumeration of the tags and their associated types, just like in a normal datatype definition.
Type-checking polymorphic variants is a subtle thing, and some expressions may result in more complex type information.
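Consider, for instance, a function that rewrites `A and `B and returns any other tag unchanged (a sketch of the kind of definition discussed next):

let f = function `A -> `C | `B -> `D | x -> x;;
(* val f : ([> `A | `B | `C | `D ] as 'a) -> 'a *)
f `E;;
(* - : [> `A | `B | `C | `D | `E ] = `E *)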
Here we are seeing two phenomena. First, since this matching is open (the last case catches any tag), we obtain the type [> `A | `B] rather than [< `A | `B] in a closed matching. Then, since x is returned as is, input and return types are identical. The notation as 'a denotes such type sharing. If we apply f to yet another tag `E, it gets added to the list.
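The functions f1, f2 and f discussed below could be sketched as:

let f1 = function `A x -> x = 1 | `B -> true | `C -> false;;
let f2 = function `A x -> x = "a" | `B -> true;;
let f x = f1 x && f2 x;;
(* val f : [< `A of int & string | `B ] -> bool *)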
Here f1 and f2 both accept the variant tags `A and `B, but the argument of `A is int for f1 and string for f2. In fâs type `C, only accepted by f1, disappears, but both argument types appear for `A as int & string. This means that if we pass the variant tag `A to f, its argument should be both int and string. Since there is no such value, f cannot be applied to `A, and `B is the only accepted input.
Even if a value has a fixed variant type, one can still give it a larger type through coercions. Coercions are normally written with both the source type and the destination type, but in simple cases the source type may be omitted.
You may also selectively coerce values through pattern matching.
When an or-pattern composed of variant tags is wrapped inside an alias-pattern, the alias is given a type containing only the tags enumerated in the or-pattern. This allows for many useful idioms, like incremental definition of functions.
To make this even more comfortable, you may use type definitions as abbreviations for or-patterns. That is, if you have defined type myvariant = [`Tag1 of int | `Tag2 of bool], then the pattern #myvariant is equivalent to writing (`Tag1(_ : int) | `Tag2(_ : bool)).
Such abbreviations may be used alone,
or combined with aliases.
After seeing the power of polymorphic variants, one may wonder why they were added to core language variants, rather than replacing them.
The answer is twofold. The first aspect is that while polymorphic variants are reasonably efficient, the lack of static type information allows for fewer optimizations and makes them slightly heavier than core language variants. However, noticeable differences would only appear on huge data structures.
More important is the fact that polymorphic variants, while type-safe, result in a weaker type discipline. That is, core language variants actually do much more than ensure type safety: they also check that you use only declared constructors, that all constructors present in a data structure are compatible, and they enforce typing constraints on their parameters.
For this reason, you must be more careful about making types explicit when you use polymorphic variants. When you write a library, this is easy since you can describe exact types in interfaces, but for simple programs you are probably better off with core language variants.
Beware also that some idioms make trivial errors very hard to find. For instance, the following code is probably wrong but the compiler has no way to see it.
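For instance, assuming a type abc = [`A | `B | `C], the first case below is presumably a typo for `A, yet the definition is accepted:

type abc = [`A | `B | `C];;
let f = function
  | `As -> "A"
  | #abc -> "other";;
(* val f : [< `A | `As | `B | `C ] -> string   -- the typo `As goes unnoticed *)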
You can avoid such risks by annotating the definition itself.
This chapter covers more advanced questions related to the limitations of polymorphic functions and types. There are some situations in OCaml where the type inferred by the type checker may be less generic than expected. Such non-genericity can stem either from interactions between side-effects and typing or the difficulties of implicit polymorphic recursion and higher-rank polymorphism.
This chapter details each of these situations and, if it is possible, how to recover genericity.
Maybe the most frequent examples of non-genericity derive from the interactions between polymorphic types and mutation. A simple example appears when typing the following expression
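namely, a reference initialized with None (the name store is used in the discussion below):

let store = ref None;;
(* val store : '_weak1 option ref *)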
Since the type of None is 'a option and the function ref has type 'b -> 'b ref, a natural deduction for the type of store would be 'a option ref. However, the inferred type, '_weak1 option ref, is different. Type variables whose names start with a _weak prefix like '_weak1 are weakly polymorphic type variables, sometimes shortened to "weak type variables". A weak type variable is a placeholder for a single type that is currently unknown. Once the specific type t behind the placeholder type '_weak1 is known, all occurrences of '_weak1 will be replaced by t. For instance, we can define another option reference and store an int inside:
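For instance (sketch):

let another_store = ref None;;
(* val another_store : '_weak2 option ref *)
another_store := Some 0;;
another_store;;
(* - : int option ref = {contents = Some 0} *)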
After storing an int inside another_store, the type of another_store has been updated from '_weak2 option ref to int option ref. This distinction between weakly polymorphic and generic polymorphic type variables protects OCaml programs from unsoundness and runtime errors. To understand where unsoundness might come from, consider this simple function, which swaps a value x with the value stored inside a store reference, if there is such a value:
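A sketch of such a function:

let swap store x =
  match !store with
  | None -> store := Some x; x
  | Some y -> store := Some x; y;;
(* val swap : 'a option ref -> 'a -> 'a *)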
We can apply this function to our store
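for instance with three successive swaps (a sketch; the values in comments follow from the definition above):

let one = swap store 1
let two = swap store 2
let three = swap store 3;;
(* val one : int = 1
   val two : int = 1
   val three : int = 2 *)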
After these three swaps the stored value is 3. Everything is fine up to now. We can then try to swap 3 with a more interesting value, for instance a function:
At this point, the type checker rightfully complains that it is not possible to swap an integer and a function, and that an int should always be traded for another int. Furthermore, the type checker prevents us from manually changing the type of the value stored by store:
Indeed, looking at the type of store, we see that the weak type '_weak1 has been replaced by the type int
Therefore, after placing an int in store, we cannot use it to store any value other than an int. More generally, weak types protect the program from undue mutation of values with a polymorphic type.
Moreover, weak types cannot appear in the signature of toplevel modules: types must be known at compilation time. Otherwise, different compilation units could replace the weak type with different and incompatible types. For this reason, compiling the following small piece of code
let option_ref = ref None
yields a compilation error
Error: The type of this expression, '_weak1 option ref, contains type variables that cannot be generalized
To solve this error, it is enough to add an explicit type annotation to specify the type at declaration time:
let option_ref: int option ref = ref None
This is in any case a good practice for such global mutable variables. Otherwise, they pick up the type of their first use, and a mistake at that point results in confusing type errors when later, correct uses are flagged as errors.
Identifying the exact context in which polymorphic types should be replaced by weak types in a modular way is a difficult question. Indeed the type system must handle the possibility that functions may hide persistent mutable states. For instance, the following function uses an internal reference to implement a delayed identity function
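A sketch of such a function, reusing swap from above (the name make_fake_id is for illustration):

let make_fake_id () =
  let store = ref None in
  fun x -> swap store x;;
(* val make_fake_id : unit -> 'a -> 'a *)
let fake_id = make_fake_id ();;
(* val fake_id : '_weak3 -> '_weak3 *)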
It would be unsound to apply this fake_id function to values with different types. The function fake_id is therefore rightfully assigned the type '_weak3 -> '_weak3 rather than 'a -> 'a. At the same time, it ought to be possible to use a local mutable state without impacting the type of a function.
To circumvent these dual difficulties, the type checker considers that any value returned by a function might rely on persistent mutable states behind the scene and should be given a weak type. This restriction on the type of mutable values and the results of function application is called the value restriction. Note that this value restriction is conservative: there are situations where the value restriction is too cautious and gives a weak type to a value that could be safely generalized to a polymorphic type:
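For instance (sketch):

let not_id = (fun x -> x) (fun x -> x);;
(* val not_id : '_weak4 -> '_weak4 *)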
Quite often, this happens when defining functions using higher order functions. To avoid this problem, a solution is to add an explicit argument to the function:
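For instance (sketch):

let id_again = fun x -> (fun x -> x) (fun x -> x) x;;
(* val id_again : 'a -> 'a *)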
With this argument, id_again is seen as a function definition by the type checker and can therefore be generalized. This kind of manipulation is called eta-expansion in lambda calculus and is sometimes referred to by that name.
There is another partial solution to the problem of unnecessary weak types, which is implemented directly within the type checker. Briefly, it is possible to prove that weak types that only appear as type parameters in covariant positions (also called positive positions) can be safely generalized to polymorphic types. For instance, the type 'a list is covariant in 'a:
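For instance (sketch):

let f () = [];;
(* val f : unit -> 'a list *)
let empty = f ();;
(* val empty : 'a list *)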
Note that the type inferred for empty is 'a list and not the '_weak5 list that should have occurred with the value restriction.
The value restriction combined with this generalization for covariant type parameters is called the relaxed value restriction.
Variance describes how type constructors behave with respect to subtyping. Consider for instance a pair of type x and xy with x a subtype of xy, denoted x :> xy:
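For illustration, the pair of types could be defined with polymorphic variants (an assumption, consistent with the value `Y used later):

type x = [ `X ];;
type xy = [ `X | `Y ];;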
As x is a subtype of xy, we can convert a value of type x to a value of type xy:
Similarly, if we have a value of type x list, we can convert it to a value of type xy list, since we could convert each element one by one:
In other words, x :> xy implies that x list :> xy list, therefore the type constructor 'a list is covariant (it preserves subtyping) in its parameter 'a.
Contrarily, if we have a function that can handle values of type xy
it can also handle values of type x:
Note that we can rewrite the type of f and f' as
In this case, we have x :> xy implies xy proc :> x proc. Notice that the second subtyping relation reverses the order of x and xy: the type constructor 'a proc is contravariant in its parameter 'a. More generally, the function type constructor 'a -> 'b is covariant in its return type 'b and contravariant in its argument type 'a.
A type constructor can also be invariant in some of its type parameters, neither covariant nor contravariant. A typical example is a reference:
If we could coerce a reference x of type x ref to the type xy ref, calling the result xy, we could use xy to store the value `Y inside the reference and then read this content through x as a value of type x, which would break the type system.
More generally, as soon as a type variable appears in a position describing mutable state it becomes invariant. As a corollary, covariant variables will never denote mutable locations and can be safely generalized. For a better description, interested readers can consult the original article by Jacques Garrigue on http://www.math.nagoya-u.ac.jp/~garrigue/papers/morepoly-long.pdf
Together, the relaxed value restriction and type parameter covariance help to avoid eta-expansion in many situations.
Moreover, when the type definitions are exposed, the type checker is able to infer variance information on its own and one can benefit from the relaxed value restriction even unknowingly. However, this is not the case anymore when defining new abstract types. As an illustration, we can define a module type collection as:
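A sketch of such a signature, together with an implementation List2 constrained by it (the body of List2 is an assumption):

module type COLLECTION = sig
  type 'a t
  val empty : unit -> 'a t
end

module List2 : COLLECTION = struct
  type 'a t = 'a list
  let empty () = []
end;;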
In this situation, when coercing the module List2 to the module type COLLECTION, the type checker forgets that 'a List2.t was covariant in 'a. Consequently, the relaxed value restriction does not apply anymore:
To keep the relaxed value restriction, we need to declare the abstract type 'a COLLECTION.t as covariant in 'a:
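Only the variance annotation +'a changes (sketch):

module type COLLECTION = sig
  type +'a t
  val empty : unit -> 'a t
end;;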
We then recover polymorphism:
The second major class of non-genericity is directly related to the problem of type inference for polymorphic functions. In some circumstances, the type inferred by OCaml might not be general enough to allow the definition of some recursive functions, in particular recursive functions acting on non-regular algebraic data types.
With a regular polymorphic algebraic data type, the type parameters of the type constructor are constant within the definition of the type. For instance, we can look at arbitrarily nested lists, defined as:
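For instance (sketch):

type 'a regular_nested = List of 'a list | Nested of 'a regular_nested list;;
let l = Nested [ List [1]; Nested [ List [2; 3] ]; Nested [ Nested [] ] ];;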
Note that the type constructor regular_nested always appears as 'a regular_nested in the definition above, with the same parameter 'a. Equipped with this type, one can compute a maximal depth with a classic recursive function
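A sketch of such a function:

let rec maximal_depth = function
  | List _ -> 1
  | Nested [] -> 0
  | Nested (a :: q) -> max (1 + maximal_depth a) (maximal_depth (Nested q));;
(* val maximal_depth : 'a regular_nested -> int *)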
Non-regular recursive algebraic data types correspond to polymorphic algebraic data types whose parameter types vary between the left and right side of the type definition. For instance, it might be interesting to define a datatype that ensures that all lists are nested at the same depth:
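For instance (sketch):

type 'a nested = List of 'a list | Nested of 'a list nested;;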
Intuitively, a value of type 'a nested is a list of lists … of lists of elements of type 'a, with k levels of nesting. We can then adapt the maximal_depth function defined on regular_nested into a depth function that computes this k. As a first try, we may define
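a first, rejected attempt along these lines:

let rec depth = function
  | List _ -> 1
  | Nested n -> 1 + depth n;;
(* rejected: the call depth n would require 'a list nested = 'a nested *)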
The type error here comes from the fact that during the definition of depth, the type checker first assigns to depth the type 'a -> 'b. When typing the pattern matching, 'a -> 'b becomes 'a nested -> 'b, then 'a nested -> int once the List branch is typed. However, when typing the application depth n in the Nested branch, the type checker encounters a problem: depth n is applied to a value of type 'a list nested, so depth must have the type 'a list nested -> 'b. Unifying this constraint with the previous one leads to the impossible constraint 'a list nested = 'a nested. In other words, within its definition, the recursive function depth is applied to values of type 'a nested with different types 'a, due to the non-regularity of the type constructor nested. This creates a problem because the type checker introduced a new type variable 'a only at the definition of the function depth, whereas here we need a different type variable for every application of the function depth.
The solution of this conundrum is to use an explicitly polymorphic type annotation for the type 'a:
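A sketch of the accepted definition:

let rec depth : 'a. 'a nested -> int = function
  | List _ -> 1
  | Nested n -> 1 + depth n;;
(* val depth : 'a nested -> int *)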
In the type of depth, 'a.'a nested -> int, the type variable 'a is universally quantified. In other words, 'a.'a nested -> int reads as "for all types 'a, depth maps 'a nested values to integers", whereas the standard type 'a nested -> int can be interpreted as "let 'a be some type variable; then depth maps 'a nested values to integers". There are two major differences between these two type expressions. First, the explicit polymorphic annotation indicates to the type checker that it needs to introduce a new type variable every time the function depth is applied. This solves our problem with the definition of the function depth.
Second, it also notifies the type checker that the type of the function should be polymorphic. Indeed, without explicit polymorphic type annotation, the following type annotation is perfectly valid
since 'a,'b and 'c denote type variables that may or may not be polymorphic. Whereas, it is an error to unify an explicitly polymorphic type with a non-polymorphic type:
An important remark here is that it is not necessary to spell out the full type of depth: it is sufficient to add annotations only for the universally quantified type variables:
With explicit polymorphic annotations, it becomes possible to implement any recursive function that depends only on the structure of the nested lists and not on the type of the elements. For instance, a more complex example would be to compute the total number of elements of the nested lists:
Similarly, it may be necessary to use more than one explicitly polymorphic type variable, for instance when computing the nested list of list lengths of a nested list:
Explicit polymorphic annotations are however not sufficient to cover all the cases where the inferred type of a function is less general than expected. A similar problem arises when using polymorphic functions as arguments of higher-order functions. For instance, we may want to compute the average depth or length of two nested lists:
It would be natural to factorize these two definitions as:
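that is, with a definition along these lines:

let average f x y = (f x + f y) / 2;;
(* val average : ('a -> int) -> 'a -> 'a -> int *)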
However, the type of average len is less generic than the type of average_len, since it requires the type of the first and second argument to be the same:
As previously with polymorphic recursion, the problem stems from the fact that type variables are introduced only at the start of the let definitions. When we compute both f x and f y, the type of x and y are unified together. To avoid this unification, we need to indicate to the type checker that f is polymorphic in its first argument. In some sense, we would want average to have type
val average: ('a. 'a nested -> int) -> 'a nested -> 'b nested -> int
Note that this syntax is not valid in OCaml: average has a universally quantified type 'a inside the type of one of its arguments, whereas for polymorphic recursion the universally quantified type was introduced before the rest of the type. This position of the universally quantified type means that average is a second-rank polymorphic function. This kind of higher-rank function is not directly supported by OCaml: type inference for second-rank polymorphic functions and beyond is undecidable, so using such higher-rank functions requires handling these universally quantified types manually.
In OCaml, there are two ways to introduce this kind of explicit universally quantified types: universally quantified record fields,
and universally quantified object methods:
To solve our problem, we can therefore use either the record solution:
or the object one:
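Putting the two solutions together, a sketch (the names nested_reduction, reduction_average and obj_average are illustrative):

type 'a nested_reduction = { f : 'elt. 'elt nested -> 'a };;

(* record solution: the polymorphic field carries the quantifier *)
let reduction_average r x y = (r.f x + r.f y) / 2;;
(* val reduction_average : int nested_reduction -> 'a nested -> 'b nested -> int *)
(* usage: reduction_average { f = depth } l1 l2 *)

(* object solution: the polymorphic method carries the quantifier *)
let obj_average (o : < f : 'a. 'a nested -> int >) x y = (o#f x + o#f y) / 2;;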
Generalized algebraic datatypes, or GADTs, extend usual sum types in two ways: constraints on type parameters may change depending on the value constructor, and some type variables may be existentially quantified. Adding constraints is done by giving an explicit return type, where type parameters are instantiated:
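For instance, the term type used in the rest of this section can be sketched as:

type _ term =
  | Int : int -> int term
  | Add : (int -> int -> int) term
  | App : ('b -> 'a) term * 'b term -> 'a term;;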
This return type must use the same type constructor as the type being defined, and have the same number of parameters. Variables are made existential when they appear inside a constructorâs argument, but not in its return type. Since the use of a return type often eliminates the need to name type parameters in the left-hand side of a type definition, one can replace them with anonymous types _ in that case.
The constraints associated to each constructor can be recovered through pattern-matching. Namely, if the type of the scrutinee of a pattern-matching contains a locally abstract type, this type can be refined according to the constructor used. These extra constraints are only valid inside the corresponding branch of the pattern-matching. If a constructor has some existential variables, fresh locally abstract types are generated, and they must not escape the scope of this branch.
We write an eval function:
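A sketch of such a function:

let rec eval : type a. a term -> a = function
  | Int n -> n                       (* a = int *)
  | Add -> (fun x y -> x + y)        (* a = int -> int -> int *)
  | App (f, x) -> (eval f) (eval x);;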
And use it:
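for instance:

let two = eval (App (App (Add, Int 1), Int 1));;
(* val two : int = 2 *)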
It is important to remark that the function eval is using the polymorphic syntax for locally abstract types. When defining a recursive function that manipulates a GADT, explicit polymorphic recursion should generally be used. For instance, the following definition fails with a type error:
In the absence of an explicit polymorphic annotation, a monomorphic type is inferred for the recursive function. If a recursive call occurs inside the function definition at a type that involves an existential GADT type variable, this variable flows to the type of the recursive function and thus escapes its scope. In the above example, this happens in the branch App(f,x) when eval is called with f as an argument. In this branch, the type of f is ($App_'b -> a) term. The prefix $ in $App_'b denotes an existential type named by the compiler (see section 7.5). Since the type of eval is 'a term -> 'a, the call eval f makes the existential type $App_'b flow to the type variable 'a and escape its scope. This triggers the above error.
Type inference for GADTs is notoriously hard. This is due to the fact that some types may become ambiguous when escaping from a branch. For instance, in the Int case above, n could have either type int or a, and the two are not equivalent outside of that branch. As a first approximation, type inference will always work if a pattern-matching is annotated with types containing no free type variables (both on the scrutinee and the return type). This is the case in the above example, thanks to the type annotation containing only locally abstract types.
In practice, type inference is a bit more clever than that: type annotations do not need to be immediately on the pattern-matching, and the types do not have to be always closed. As a result, it is usually enough to only annotate functions, as in the example above. Type annotations are propagated in two ways: for the scrutinee, they follow the flow of type inference, in a way similar to polymorphic methods; for the return type, they follow the structure of the program, they are split on functions, propagated to all branches of a pattern matching, and go through tuples, records, and sum types. Moreover, the notion of ambiguity used is stronger: a type is only seen as ambiguous if it was mixed with incompatible types (equated by constraints), without type annotations between them. For instance, the following program types correctly.
Here the return type int is never mixed with a, so it is seen as non-ambiguous, and can be inferred. When using such partial type annotations we strongly suggest specifying the -principal mode, to check that inference is principal.
The exhaustiveness check is aware of GADT constraints, and can automatically infer that some cases cannot happen. For instance, the following pattern matching is correctly seen as exhaustive (the Add case cannot happen).
Usually, the exhaustiveness check only tries to check whether the cases omitted from the pattern matching are typable or not. However, you can force it to try harder by adding refutation cases, written as a full stop. In the presence of a refutation case, the exhaustiveness check will first compute the intersection of the pattern with the complement of the cases preceding it. It then checks whether the resulting patterns can really match any concrete values by trying to type-check them. Wild cards in the generated patterns are handled in a special way: if their type is a variant type with only GADT constructors, then the pattern is split into the different constructors, in order to check whether any of them is possible (this splitting is not done for arguments of these constructors, to avoid non-termination). We also split tuples and variant types with only one case, since they may contain GADTs inside. For instance, the following code is deemed exhaustive:
Namely, the inferred remaining case is Some _, which is split into Some (Int, _) and Some (Bool, _), which are both untypable because deep expects a non-existing char t as the first element of the tuple. Note that the refutation case could be omitted here, because it is automatically added when there is only one case in the pattern matching.
Another addition is that the redundancy check is now aware of GADTs: a case will be detected as redundant if it could be replaced by a refutation case using the same pattern.
The term type we have defined above is an indexed type, where a type parameter reflects a property of the value contents. Another use of GADTs is singleton types, where a GADT value represents exactly one type. This value can be used as runtime representation for this type, and a function receiving it can have a polytypic behavior.
Here is an example of a polymorphic function that takes the runtime representation of some type t and a value of the same type, then pretty-prints the value as a string:
Another frequent application of GADTs is equality witnesses.
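A sketch of such a witness type and of a cast function using it:

type (_, _) eq = Eq : ('a, 'a) eq;;

let cast : type a b. (a, b) eq -> a -> b =
  fun eq x -> match eq with Eq -> x;;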
Here type eq has only one constructor, and by matching on it one adds a local constraint allowing the conversion between a and b. By building such equality witnesses, one can make equal types which are syntactically different.
Here is an example using both singleton types and equality witnesses to implement dynamic types.
The typing of pattern matching in the presence of GADTs can generate many existential types. When necessary, error messages refer to these existential types using compiler-generated names. Currently, the compiler generates these names according to the following nomenclature:
As shown by the last item, the current behavior is imperfect and may be improved in future versions.
As explained above, pattern-matching on a GADT constructor may introduce existential types. Syntax has been introduced which allows them to be named explicitly. For instance, the following code names the type of the argument of f and uses this name.
All existential type variables of the constructor must be introduced by the (type ...) construct and bound by a type annotation on the outside of the constructor argument.
GADT pattern-matching may also add type equations to non-local abstract types. The behaviour is the same as with local abstract types. Reusing the above eq type, one can write:
Of course, not all abstract types can be refined, as this would contradict the exhaustiveness check. Namely, builtin types (those defined by the compiler itself, such as int or array), and abstract types defined by the local module, are non-instantiable, and as such cause a type error rather than introduce an equation.
(Chapter written by Didier RĂ©my)
In this chapter, we show some larger examples using objects, classes and modules. We review many of the object features simultaneously on the example of a bank account. We show how modules taken from the standard library can be expressed as classes. Lastly, we describe a programming pattern known as virtual types through the example of window managers.
In this section, we illustrate most aspects of objects and inheritance by refining, debugging, and specializing the following initial, naive definition of a simple bank account. (We reuse the module Euro defined at the end of chapter 3.)
We now refine this definition with a method to compute interest.
We make the method interest private, since clearly it should not be called freely from the outside. Here, it is only made accessible to subclasses that will manage monthly or yearly updates of the account.
We should soon fix a bug in the current definition: the deposit method can be used for withdrawing money by depositing negative amounts. We can fix this directly:
However, the bug might be fixed more safely by the following definition:
In particular, this does not require the knowledge of the implementation of the method deposit.
To keep track of operations, we extend the class with a mutable field history and a private method trace to add an operation in the log. Then each method to be traced is redefined.
One may wish to open an account and simultaneously deposit some initial amount. Although the initial implementation did not address this requirement, it can be achieved by using an initializer.
A better alternative is:
Indeed, the latter is safer since the call to deposit will automatically benefit from safety checks and from the trace. Let's test it:
Closing an account can be done with the following polymorphic function:
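A sketch of such a function:

let close c = c#withdraw c#balance;;
(* val close : < balance : 'a; withdraw : 'a -> 'b; .. > -> 'b *)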
Of course, this applies to all sorts of accounts.
Finally, we gather several versions of the account into a module Account abstracted over some currency.
This shows the use of modules to group several class definitions that can in fact be thought of as a single unit. This unit would be provided by a bank for both internal and external uses. This is implemented as a functor that abstracts over the currency so that the same code can be used to provide accounts in different currencies.
The class bank is the real implementation of the bank account (it could have been inlined). This is the one that will be used for further extensions, refinements, etc. Conversely, the client will only be given the client view.
Hence, the clients do not have direct access to the balance, nor the history of their own accounts. Their only way to change their balance is to deposit or withdraw money. It is important to give the clients a class and not just the ability to create accounts (such as the promotional discount account), so that they can personalize their account. For instance, a client may refine the deposit and withdraw methods so as to do his own financial bookkeeping, automatically. On the other hand, the function discount is given as such, with no possibility for further personalization.
It is important to provide the client's view as a functor Client so that client accounts can still be built after a possible specialization of the bank. The functor Client may remain unchanged and be passed the new definition to initialize a client's view of the extended account.
The functor Client may also be redefined when some new features of the account can be given to the client.
One may wonder whether it is possible to treat primitive types such as integers and strings as objects. Although this is usually uninteresting for integers or strings, there may be some situations where this is desirable. The class money above is such an example. We show here how to do it for strings.
A naive definition of strings as objects could be:
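for instance (sketch):

class ostring s =
  object
    method get n = String.get s n
    method print = print_string s
    method escaped = new ostring (String.escaped s)
  end;;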
However, the method escaped returns an object of the class ostring, and not an object of the current class. Hence, if the class is further extended, the method escaped will only return an object of the parent class.
As seen in section 3.16, the solution is to use functional update instead. We need to create an instance variable containing the representation s of the string.
As shown in the inferred type, the methods escaped and sub now return objects of the same type as the one of the class.
Another difficulty is the implementation of the method concat. In order to concatenate a string with another string of the same class, one must be able to access the instance variable externally. Thus, a method repr returning s must be defined. Here is the correct definition of strings:
Another constructor of the class string can be defined to return a new string of a given length:
Here, exposing the representation of strings is probably harmless. We could also hide the representation of strings, as we hid the currency in the class money of section 3.17.
There is sometimes an alternative between using modules or classes for parametric data types. Indeed, there are situations when the two approaches are quite similar. For instance, a stack can be straightforwardly implemented as a class:
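A sketch of such a class (the exact method set is an assumption):

exception Empty

class ['a] stack =
  object
    val mutable l = ([] : 'a list)
    method push x = l <- x :: l
    method pop = match l with [] -> raise Empty | a :: rest -> l <- rest; a
    method clear = l <- []
    method length = List.length l
  end;;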
However, writing a method for iterating over a stack is more problematic. A method fold would have type ('b -> 'a -> 'b) -> 'b -> 'b. Here 'a is the parameter of the stack. The parameter 'b is not related to the class 'a stack but to the argument that will be passed to the method fold. A naive approach is to make 'b an extra parameter of class stack:
However, the method fold of a given object can only be applied to functions that all have the same type:
A better solution is to use polymorphic methods, which were introduced in OCaml version 3.05. Polymorphic methods make it possible to treat the type variable 'b in the type of fold as universally quantified, giving fold the polymorphic type 'b. ('b -> 'a -> 'b) -> 'b -> 'b. An explicit type declaration on the method fold is required, since the type checker cannot infer the polymorphic type by itself.
A simplified version of object-oriented hash tables should have the following class type.
A simple implementation, which is quite reasonable for small hash tables is to use an association list:
A better implementation, and one that scales up better, is to use a true hash table… whose elements are small hash tables!
Implementing sets leads to another difficulty. Indeed, the method union needs to be able to access the internal representation of another object of the same class.
This is another instance of friend functions as seen in section 3.17. Indeed, this is the same mechanism used in the module Set in the absence of objects.
In the object-oriented version of sets, we only need to add an additional method tag to return the representation of a set. Since sets are parametric in the type of elements, the method tag has a parametric type 'a tag, concrete within the module definition but abstract in its signature. From outside, it will then be guaranteed that two objects with a method tag of the same type will share the same representation.
The following example, known as the subject/observer pattern, is often presented in the literature as a difficult inheritance problem with inter-connected classes. The general pattern amounts to the definition of a pair of classes that recursively interact with one another.
The class observer has a distinguished method notify that requires two arguments, a subject and an event to execute an action.
The class subject remembers a list of observers in an instance variable, and has a distinguished method notify_observers to broadcast the message notify to all observers with a particular event e.
The difficulty usually lies in defining instances of the pattern above by inheritance. This can be done in a natural and obvious manner in OCaml, as shown on the following example manipulating windows.
As can be expected, the type of window is recursive.
However, the two classes of window_subject and window_observer are not mutually recursive.
Classes window_observer and window_subject can still be extended by inheritance. For instance, one may enrich the subject with new behaviors and refine the behavior of the observer.
We can also create a different kind of observer:
and attach several observers to the same object:
In this chapter, we shall look at the parallel programming facilities in OCaml. The OCaml standard library exposes low-level primitives for parallel programming; we recommend that users rely on higher-level parallel programming libraries such as domainslib. This tutorial first covers high-level parallel programming using domainslib, followed by the low-level primitives exposed by the compiler.
OCaml distinguishes concurrency and parallelism and provides distinct mechanisms for expressing them. Concurrency is overlapped execution of tasks (section 12.24.2) whereas parallelism is simultaneous execution of tasks. In particular, parallel tasks overlap in time but concurrent tasks may or may not overlap in time. Tasks may execute concurrently by yielding control to each other. While concurrency is a program structuring mechanism, parallelism is a mechanism to make your programs run faster. If you are interested in the concurrent programming mechanisms in OCaml, please refer to the section 12.24 on effect handlers and the chapter 33 on the threads library.
Domains are the units of parallelism in OCaml. The module Domain provides the primitives to create and manage domains. New domains can be spawned using the spawn function.
The spawn function executes the given computation in parallel with the calling domain.
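For instance (sketch):

Domain.spawn (fun _ -> print_endline "I ran in parallel");;
(* I ran in parallel
   - : unit Domain.t = <abstr> *)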
Domains are heavy-weight entities. Each domain maps 1:1 to an operating system thread. Each domain also has its own runtime state, which includes domain-local structures for allocating memory. Hence, they are relatively expensive to create and tear down.
It is recommended that the programs do not spawn more domains than cores available.
In this tutorial, we shall be implementing, running and measuring the performance of parallel programs. The results observed are dependent on the number of cores available on the target machine. This tutorial is being written on a 2.3 GHz Quad-Core Intel Core i7 MacBook Pro with 4 cores and 8 hardware threads. It is reasonable to expect roughly 4x performance on 4 domains for parallel programs with little coordination between the domains, and when the machine is not under load. Beyond 4 domains, the speedup is likely to be less than linear. We shall also use the command-line benchmarking tool hyperfine for benchmarking our programs.
We shall use the program to compute the nth Fibonacci number using recursion as a running example. The sequential program for computing the nth Fibonacci number is given below.
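A sketch of this program (the fib function is the same one reused by the parallel versions below):

(* fib.ml *)
let n = int_of_string Sys.argv.(1)

let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

let main () = Printf.printf "fib(%d) = %d\n" n (fib n)

let _ = main ()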
The program can be compiled and benchmarked as follows.
$ ocamlopt -o fib.exe fib.ml
$ ./fib.exe 42
fib(42) = 433494437
$ hyperfine './fib.exe 42'   # Benchmarking
Benchmark 1: ./fib.exe 42
  Time (mean ± sd):      1.193 s ±  0.006 s    [User: 1.186 s, System: 0.003 s]
  Range (min … max):     1.181 s …  1.202 s    10 runs
We see that it takes around 1.2 seconds to compute the 42nd Fibonacci number.
Spawned domains can be joined using the join function to get their results. The join function waits for target domain to terminate. The following program computes the nth Fibonacci number twice in parallel.
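A sketch of this program:

(* fib_twice.ml *)
let n = int_of_string Sys.argv.(1)

let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

let main () =
  let d1 = Domain.spawn (fun _ -> fib n) in
  let d2 = Domain.spawn (fun _ -> fib n) in
  let r1 = Domain.join d1 in
  let r2 = Domain.join d2 in
  Printf.printf "fib(%d) = %d\nfib(%d) = %d\n" n r1 n r2

let _ = main ()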
The program spawns two domains which compute the nth Fibonacci number. The spawn function returns a Domain.t value which can be joined to get the result of the parallel computation. The join function blocks until the computation runs to completion.
$ ocamlopt -o fib_twice.exe fib_twice.ml
$ ./fib_twice.exe 42
fib(42) = 433494437
fib(42) = 433494437
$ hyperfine './fib_twice.exe 42'
Benchmark 1: ./fib_twice.exe 42
  Time (mean ± sd):      1.249 s ±  0.025 s    [User: 2.451 s, System: 0.012 s]
  Range (min … max):     1.221 s …  1.290 s    10 runs
As one can see, computing the nth Fibonacci number twice took almost the same time as computing it once, thanks to parallelism.
Let us attempt to parallelise the Fibonacci function. The two recursive calls may be executed in parallel. However, naively parallelising the recursive calls by spawning domains for each one will not work as it spawns too many domains.
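A sketch of such a naive attempt:

(* fib_par1.ml *)
let n = int_of_string Sys.argv.(1)

let rec fib_par n =
  if n < 2 then 1
  else begin
    (* one new domain per recursive call: far too many domains *)
    let d1 = Domain.spawn (fun _ -> fib_par (n - 1)) in
    let d2 = Domain.spawn (fun _ -> fib_par (n - 2)) in
    Domain.join d1 + Domain.join d2
  end

let main () = Printf.printf "fib(%d) = %d\n" n (fib_par n)

let _ = main ()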
$ ocamlopt -o fib_par1.exe fib_par1.ml
$ ./fib_par1.exe 42
Fatal error: exception Failure("failed to allocate domain")
OCaml has a limit of 128 domains that can be active at the same time. An attempt to spawn more domains will raise an exception. How then can we parallelise the Fibonacci function?
The OCaml standard library provides only low-level primitives for concurrent and parallel programming, leaving high-level programming libraries to be developed and distributed outside the core compiler distribution. Domainslib is such a library for nested-parallel programming, which is epitomised by the parallelism available in the recursive Fibonacci computation. Let us use domainslib to parallelise the recursive Fibonacci program. It is recommended that you install domainslib using the opam package manager. This tutorial uses domainslib version 0.4.2.
Domainslib provides an async/await mechanism for spawning parallel tasks and awaiting their results. On top of this mechanism, domainslib provides parallel iterators. At its core, domainslib has an efficient implementation of work-stealing queue in order to efficiently share tasks with other domains. A parallel implementation of the Fibonacci program is given below.
(* fib_par2.ml *)
let num_domains = int_of_string Sys.argv.(1)
let n = int_of_string Sys.argv.(2)

let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

module T = Domainslib.Task

let rec fib_par pool n =
  if n > 20 then begin
    let a = T.async pool (fun _ -> fib_par pool (n-1)) in
    let b = T.async pool (fun _ -> fib_par pool (n-2)) in
    T.await pool a + T.await pool b
  end else fib n

let main () =
  let pool = T.setup_pool ~num_additional_domains:(num_domains - 1) () in
  let res = T.run pool (fun _ -> fib_par pool n) in
  T.teardown_pool pool;
  Printf.printf "fib(%d) = %d\n" n res

let _ = main ()
The program takes the number of domains and the input to the Fibonacci function as the first and the second command-line arguments respectively.
Let us start with the main function. First, we set up a pool of domains on which the nested parallel tasks will run. The domain invoking the run function will also participate in executing the tasks submitted to the pool. We invoke the parallel Fibonacci function fib_par in the run function. Finally, we tear down the pool and print the result.
For sufficiently large inputs (n > 20), the fib_par function spawns the left and the right recursive calls asynchronously in the pool using the async function. The async function returns a promise for the result. The result of an asynchronous computation is obtained by awaiting the promise using the await function. The await function call blocks until the promise is resolved.
For small inputs, the fib_par function simply calls the sequential Fibonacci function fib. It is important to switch to sequential mode for small problem sizes. If not, the cost of parallelisation will outweigh the work available.
For simplicity, we use ocamlfind to compile this program. It is recommended that the users use dune to build their programs that utilise libraries installed through opam.
$ ocamlfind ocamlopt -package domainslib -linkpkg -o fib_par2.exe fib_par2.ml
$ ./fib_par2.exe 1 42
fib(42) = 433494437
$ hyperfine './fib.exe 42' './fib_par2.exe 2 42' \
    './fib_par2.exe 4 42' './fib_par2.exe 8 42'
Benchmark 1: ./fib.exe 42
  Time (mean ± sd):      1.217 s ±  0.018 s    [User: 1.203 s, System: 0.004 s]
  Range (min … max):     1.202 s …  1.261 s    10 runs

Benchmark 2: ./fib_par2.exe 2 42
  Time (mean ± sd):     628.2 ms ±   2.9 ms    [User: 1243.1 ms, System: 4.9 ms]
  Range (min … max):    625.7 ms … 634.5 ms    10 runs

Benchmark 3: ./fib_par2.exe 4 42
  Time (mean ± sd):     337.6 ms ±  23.4 ms    [User: 1321.8 ms, System: 8.4 ms]
  Range (min … max):    318.5 ms … 377.6 ms    10 runs

Benchmark 4: ./fib_par2.exe 8 42
  Time (mean ± sd):     250.0 ms ±   9.4 ms    [User: 1877.1 ms, System: 12.6 ms]
  Range (min … max):    242.5 ms … 277.3 ms    11 runs

Summary
  './fib_par2.exe 8 42' ran
    1.35 ± 0.11 times faster than './fib_par2.exe 4 42'
    2.51 ± 0.10 times faster than './fib_par2.exe 2 42'
    4.87 ± 0.20 times faster than './fib.exe 42'
The results show that, with 8 domains, the parallel Fibonacci program runs 4.87 times faster than the sequential version.
Many numerical algorithms use for-loops. The parallel-for primitive provides a straight-forward way to parallelise such code. Let us take the spectral-norm benchmark from the computer language benchmarks game and parallelise it. The sequential version of the program is given below.
Observe that the program has nested loops in eval_A_times_u and eval_At_times_u. Each iteration of the outer loop body reads from u but writes to disjoint memory locations in v. Hence, the iterations of the outer loop are not dependent on each other and can be executed in parallel.
The parallel version of spectral norm is shown below.
(* spectralnorm_par.ml *)
let num_domains = try int_of_string Sys.argv.(1) with _ -> 1
let n = try int_of_string Sys.argv.(2) with _ -> 32

let eval_A i j = 1. /. float((i+j)*(i+j+1)/2+i+1)

module T = Domainslib.Task

let eval_A_times_u pool u v =
  let n = Array.length v - 1 in
  T.parallel_for pool ~start:0 ~finish:n ~body:(fun i ->
    let vi = ref 0. in
    for j = 0 to n do vi := !vi +. eval_A i j *. u.(j) done;
    v.(i) <- !vi)

let eval_At_times_u pool u v =
  let n = Array.length v - 1 in
  T.parallel_for pool ~start:0 ~finish:n ~body:(fun i ->
    let vi = ref 0. in
    for j = 0 to n do vi := !vi +. eval_A j i *. u.(j) done;
    v.(i) <- !vi)

let eval_AtA_times_u pool u v =
  let w = Array.make (Array.length u) 0.0 in
  eval_A_times_u pool u w;
  eval_At_times_u pool w v

let () =
  let pool = T.setup_pool ~num_additional_domains:(num_domains - 1) () in
  let u = Array.make n 1.0 and v = Array.make n 0.0 in
  T.run pool (fun _ ->
    for _i = 0 to 9 do
      eval_AtA_times_u pool u v;
      eval_AtA_times_u pool v u
    done);
  let vv = ref 0.0 and vBv = ref 0.0 in
  for i = 0 to n-1 do
    vv := !vv +. v.(i) *. v.(i);
    vBv := !vBv +. u.(i) *. v.(i)
  done;
  T.teardown_pool pool;
  Printf.printf "%0.9f\n" (sqrt(!vBv /. !vv))
Observe that the parallel_for function is isomorphic to the for-loop in the sequential version. No other change is required except for the boilerplate code to set up and tear down the pool.
$ ocamlopt -o spectralnorm.exe spectralnorm.ml
$ ocamlfind ocamlopt -package domainslib -linkpkg -o spectralnorm_par.exe \
    spectralnorm_par.ml
$ hyperfine './spectralnorm.exe 4096' './spectralnorm_par.exe 2 4096' \
    './spectralnorm_par.exe 4 4096' './spectralnorm_par.exe 8 4096'
Benchmark 1: ./spectralnorm.exe 4096
  Time (mean ± sd):      1.989 s ±  0.013 s    [User: 1.972 s, System: 0.007 s]
  Range (min … max):     1.975 s …  2.018 s    10 runs

Benchmark 2: ./spectralnorm_par.exe 2 4096
  Time (mean ± sd):      1.083 s ±  0.015 s    [User: 2.140 s, System: 0.009 s]
  Range (min … max):     1.064 s …  1.102 s    10 runs

Benchmark 3: ./spectralnorm_par.exe 4 4096
  Time (mean ± sd):     698.7 ms ±  10.3 ms    [User: 2730.8 ms, System: 18.3 ms]
  Range (min … max):    680.9 ms … 721.7 ms    10 runs

Benchmark 4: ./spectralnorm_par.exe 8 4096
  Time (mean ± sd):     921.8 ms ±  52.1 ms    [User: 6711.6 ms, System: 51.0 ms]
  Range (min … max):    838.6 ms … 989.2 ms    10 runs

Summary
  './spectralnorm_par.exe 4 4096' ran
    1.32 ± 0.08 times faster than './spectralnorm_par.exe 8 4096'
    1.55 ± 0.03 times faster than './spectralnorm_par.exe 2 4096'
    2.85 ± 0.05 times faster than './spectralnorm.exe 4096'
On the author's machine, the program scales reasonably well up to 4 domains but performs worse with 8 domains. Recall that the machine only has 4 physical cores. Debugging and fixing this performance issue is beyond the scope of this tutorial.
An important aspect of the scalability of parallel OCaml programs is the scalability of the garbage collector (GC). The OCaml GC is designed to have both low latency and good parallel scalability. OCaml has a generational garbage collector with a small minor heap and a large major heap. New objects (up to a certain size) are allocated in the minor heap. Each domain has its own domain-local minor heap arena into which new objects are allocated without synchronising with the other domains. When a domain exhausts its minor heap arena, it calls for a stop-the-world collection of the minor heaps. In the stop-the-world section, all the domains collect their minor heap arenas in parallel, evacuating the survivors to the major heap.
For the major heap, each domain maintains domain-local, size-segmented pools of memory into which large objects and survivors from the minor collection are allocated. Having domain-local pools avoids synchronisation for most major heap allocations. The major heap is collected by a concurrent mark-and-sweep algorithm that involves a few short stop-the-world pauses for each major cycle.
Overall, the users should expect the garbage collector to scale well with increasing number of domains, with the latency remaining low. For more information on the design and evaluation of the garbage collector, please have a look at the ICFP 2020 paper on Retrofitting Parallelism onto OCaml.
Modern processors and compilers aggressively optimise programs. These optimisations speed up sequential programs without otherwise affecting their behaviour, but they can make surprising behaviours visible in parallel programs. To benefit from these optimisations, OCaml adopts a relaxed memory model that precisely specifies which of these relaxed behaviours programs may observe. While relaxed memory models are difficult to program against directly, the OCaml memory model provides recipes that retain the simplicity of sequential reasoning.
Firstly, immutable values may be freely shared between multiple domains and may be accessed in parallel. For mutable data structures such as reference cells, arrays and mutable record fields, programmers should avoid data races. Reference cells, arrays and mutable record fields are said to be non-atomic data structures. A data race is said to occur when two domains concurrently access a non-atomic memory location without synchronisation and at least one of the accesses is a write. OCaml provides a number of ways to introduce synchronisation including atomic variables (section 9.7) and mutexes (section 9.5).
Importantly, for data race free (DRF) programs, OCaml provides sequentially consistent (SC) semantics: the observed behaviour of such programs can be explained by an interleaving of the operations from the different domains. This property is known as the DRF-SC guarantee. Moreover, in OCaml the DRF-SC guarantee is modular: if a part of a program is data race free, then the OCaml memory model ensures that this part has sequential consistency despite other parts of the program having data races. Even for programs with data races, OCaml provides strong guarantees: while the user may observe non sequentially consistent behaviours, there are no crashes.
For more details on the relaxed behaviours in the presence of data races, please have a look at the chapter on the hard bits of the memory model (chapter 10).
Domains may perform blocking synchronisation with the help of Mutex, Condition and Semaphore modules. These modules are the same as those used to synchronise threads created by the threads library (chapter 33). For clarity, in the rest of this chapter, we shall call the threads created by the threads library as systhreads. The following program implements a concurrent stack using mutex and condition variables.
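A sketch of such an implementation, following the description below (the function names are assumptions):

type 'a t = {
  mutable contents : 'a list;
  mutex : Mutex.t;
  nonempty : Condition.t;
}

let create () =
  { contents = []; mutex = Mutex.create (); nonempty = Condition.create () }

let push s v =
  Mutex.lock s.mutex;
  s.contents <- v :: s.contents;
  Condition.signal s.nonempty;
  Mutex.unlock s.mutex

let pop s =
  Mutex.lock s.mutex;
  while s.contents = [] do
    Condition.wait s.nonempty s.mutex
  done;
  let v = List.hd s.contents in
  s.contents <- List.tl s.contents;
  Mutex.unlock s.mutex;
  v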
The concurrent stack is implemented using a record with three fields: a mutable field contents which stores the elements in the stack, a mutex to control access to the contents field, and a condition variable nonempty, which is used to signal blocked domains waiting for the stack to become non-empty.
The push operation locks the mutex, updates the contents field with a new list whose head is the element being pushed and the tail is the old list. The condition variable nonempty is signalled while the lock is held in order to wake up any domains waiting on this condition. If there are waiting domains, one of the domains is woken up. If there are none, then the signal operation has no effect.
The pop operation locks the mutex and checks whether the stack is empty. If so, the calling domain waits on the condition variable nonempty using the wait primitive. The wait call atomically suspends the execution of the current domain and unlocks the mutex. When this domain is woken up again (when the wait call returns), it holds the lock on the mutex. The domain then reads the contents of the stack again. If the pop operation sees that the stack is non-empty, it updates the contents to the tail of the old list, and returns the head.
The use of the mutex to control access to the shared resource contents introduces sufficient synchronisation between multiple domains using the stack. Hence, there are no data races when multiple domains use the stack in parallel.
How do systhreads interact with domains? The systhreads created on a particular domain remain pinned to that domain. Only one systhread at a time is allowed to run OCaml code on a particular domain. However, systhreads belonging to a particular domain may run C library or system code in parallel. Systhreads belonging to different domains may execute in parallel.
When using systhreads, the thread created for executing the computation given to Domain.spawn is also treated as a systhread. For example, the following program creates in total two domains (including the initial domain) with two systhreads each (including the initial systhread for each of the domains).
(* dom_thr.ml *)
let m = Mutex.create ()
let r = ref None (* protected by m *)

let task () =
  let my_thr_id = Thread.(id (self ())) in
  let my_dom_id :> int = Domain.self () in
  Mutex.lock m;
  begin match !r with
  | None ->
      Printf.printf "Thread %d running on domain %d saw initial write\n%!"
        my_thr_id my_dom_id
  | Some their_thr_id ->
      Printf.printf "Thread %d running on domain %d saw the write by thread %d\n%!"
        my_thr_id my_dom_id their_thr_id
  end;
  r := Some my_thr_id;
  Mutex.unlock m

let task' () =
  let t = Thread.create task () in
  task ();
  Thread.join t

let main () =
  let d = Domain.spawn task' in
  task' ();
  Domain.join d

let _ = main ()
$ ocamlopt -I +threads unix.cmxa threads.cmxa -o dom_thr.exe dom_thr.ml
$ ./dom_thr.exe
Thread 1 running on domain 1 saw initial write
Thread 0 running on domain 0 saw the write by thread 1
Thread 2 running on domain 1 saw the write by thread 0
Thread 3 running on domain 0 saw the write by thread 2
This program uses a shared reference cell protected by a mutex to communicate between the different systhreads running on two different domains. The systhread identifiers uniquely identify systhreads in the program. The initial domain has domain id 0 and its initial systhread has thread id 0. The newly spawned domain has domain id 1.
During parallel execution with multiple domains, C code running on a domain may run in parallel with any C code running in other domains even if neither of them has released the "domain lock". Prior to OCaml 5.0, C bindings may have assumed that if the OCaml runtime lock is not released, then it would be safe to manipulate global C state (e.g. initialise a function-local static value). This is no longer true in the presence of parallel execution with multiple domains.
Mutex, condition variables and semaphores are used to implement blocking synchronisation between domains. For non-blocking synchronisation, OCaml provides Atomic variables. As the name suggests, non-blocking synchronisation does not provide mechanisms for suspending and waking up domains. On the other hand, primitives used in non-blocking synchronisation are often compiled to atomic read-modify-write primitives that the hardware provides. As an example, the following program increments a non-atomic counter and an atomic counter in parallel.
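A minimal sketch of such a program is shown below; the file name incr.ml and the command-line handling are assumptions made to match the run that follows.

(* incr.ml : increment a non-atomic and an atomic counter from two domains. *)
let main n =
  let non_atomic = ref 0 in
  let atomic = Atomic.make 0 in
  let work () =
    for _ = 1 to n do
      incr non_atomic;       (* racy read-modify-write on a non-atomic ref *)
      Atomic.incr atomic     (* atomic read-modify-write *)
    done
  in
  let d1 = Domain.spawn work in
  let d2 = Domain.spawn work in
  Domain.join d1; Domain.join d2;
  Printf.printf "Non-atomic ref count: %d\n" !non_atomic;
  Printf.printf "Atomic ref count: %d\n" (Atomic.get atomic)

let () = main (int_of_string Sys.argv.(1))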
$ ocamlopt -o incr.exe incr.ml
$ ./incr.exe 1_000_000
Non-atomic ref count: 1187193
Atomic ref count: 2000000
Observe that the result from using the non-atomic counter is lower than what one would naively expect. This is because the non-atomic incr function is equivalent to:
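That is, as a sketch:

let incr r =
  let curr = !r in   (* load the current value *)
  r := curr + 1      (* store the incremented value *)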
Observe that the load and the store are two separate operations, and the increment operation as a whole is not performed atomically. When two domains execute this code in parallel, both of them may read the same value of the counter curr and update it to curr + 1. Hence, instead of two increments, the effect will be that of a single increment. On the other hand, the atomic counter performs the load and the store atomically with the help of hardware support for atomicity. The atomic counter returns the expected result.
Atomic variables can be used for low-level synchronisation between domains. The following example uses an atomic variable to exchange a message between two domains.
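A sketch of such an exchange is shown below; the value 42 and the spinning receiver are illustrative assumptions.

let r = Atomic.make 0

let sender () = Atomic.set r 42

let receiver () =
  (* Spin until the sender's write becomes visible. *)
  while Atomic.get r = 0 do
    Domain.cpu_relax ()
  done;
  Atomic.get r

let main () =
  let d = Domain.spawn sender in
  let v = receiver () in
  Domain.join d;
  Printf.printf "received %d\n" v

let () = main ()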
While the sender and the receiver compete to access r, this is not a data race since r is an atomic reference.
The Atomic module is used to implement non-blocking, lock-free data structures. The following program implements a lock-free stack.
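A sketch consistent with the description below is:

(* A Treiber stack: an atomic reference holding a list. *)
type 'a t = 'a list Atomic.t

let create () : 'a t = Atomic.make []

let rec push s v =
  let cur = Atomic.get s in
  if Atomic.compare_and_set s cur (v :: cur) then ()
  else (Domain.cpu_relax (); push s v)   (* another domain won the race; retry *)

let rec pop s =
  let cur = Atomic.get s in
  match cur with
  | [] -> None
  | x :: rest ->
      if Atomic.compare_and_set s cur rest then Some x
      else (Domain.cpu_relax (); pop s)  (* retry after backing off *)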
The atomic stack is represented by an atomic reference that holds a list. The push and pop operations use the compare_and_set primitive to attempt to atomically update the atomic reference. The expression compare_and_set r seen v sets the value of r to v if and only if its current value is physically equal to seen. Importantly, the comparison and the update occur atomically. The expression evaluates to true if the comparison succeeded (and the update happened) and false otherwise.
If the compare_and_set fails, then some other domain is also attempting to update the atomic reference at the same time. In this case, the push and pop operations call Domain.cpu_relax to back off for a short duration allowing competing domains to make progress before retrying the failed operation. This lock-free stack implementation is also known as Treiber stack.
This chapter describes the details of the OCaml relaxed memory model. The relaxed memory model describes what values an OCaml program is allowed to witness when reading a memory location. If you are interested in high-level parallel programming in OCaml, please have a look at chapter 9 on parallel programming.
This chapter is aimed at experts who would like to understand the details of the OCaml memory model from a practitioner's perspective. For a formal definition of the OCaml memory model, its guarantees and the compilation to hardware memory models, please have a look at the PLDI 2018 paper on Bounding Data Races in Space and Time. The memory model presented in this chapter is an extension of the one presented in the PLDI 2018 paper. This chapter also covers some pragmatic aspects of the memory model that are not covered in the paper.
The simplest memory model that we could give to our programs is sequential consistency. Under sequential consistency, the values observed by the program can be explained through some interleaving of the operations from different domains in the program. For example, consider the following program with two domains d1 and d2 executing in parallel:
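A sketch of such a program is given below. The exact bindings are assumptions; what matters is that d1 reads a, then b, then a again, while d2 writes to b (here, the value 0, so that r2 = 0 is observable).

let a = ref 1 and b = ref 1   (* both initially 1 *)

let d1 () =
  let r1 = !a * 2 in
  let r2 = !b in
  let r3 = !a * 2 in
  (r1, r2, r3)

let d2 () = b := 0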
The reference cells a and b are initially 1. The user may observe r1 = 2, r2 = 0, r3 = 2 if the write to b in d2 occurred before the read of b in d1. Here, the observed behaviour can be explained in terms of interleaving of the operations from different domains.
Let us now assume that a and b are aliases of each other.
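A sketch of the aliased program, including an assumed main function with the assertion discussed below:

let ab = ref 1
let a = ab and b = ab   (* a and b refer to the same cell *)

let d1 () =
  let r1 = !a * 2 in
  let r2 = !b in
  let r3 = !a * 2 in
  (r1, r2, r3)

let d2 () = b := 0

let main () =
  let h = Domain.spawn d2 in
  let (_r1, r2, r3) = d1 () in
  Domain.join h;
  (* If the read of b returned 0, the later read of a should also return 0. *)
  assert (not (r2 = 0 && r3 = 2))

let () = main ()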
In the above program, the variables ab, a and b refer to the same reference cell. One would expect that the assertion in the main function will never fail. The reasoning is that if r2 is 0, then the write in d2 occurred before the read of b in d1. Given that a and b are aliases, the second read of a in d1 should also return 0.
Surprisingly, this assertion may fail in OCaml due to compiler optimisations. The OCaml compiler observes the common sub-expression !a * 2 in d1 and optimises the program to:
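A sketch of the transformed d1:

let d1 () =
  let r1 = !a * 2 in
  let r2 = !b in
  let r3 = r1 in       (* the common sub-expression !a * 2 is reused *)
  (r1, r2, r3)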
This optimisation is known as common sub-expression elimination (CSE). Such optimisations are valid and necessary for good performance, and do not change the sequential meaning of the program. However, CSE breaks sequential reasoning for parallel programs.
In the optimised program above, even if the write to b in d2 occurs between the first and the second reads in d1, the program will observe the value 2 for r3, causing the assertion to fail. The observed behaviour cannot be explained by an interleaving of operations from different domains in the source program. Thus, the CSE optimisation is said to be invalid under sequential consistency.
One way to explain the observed behaviour is as if the operations performed on a domain were reordered. For example, the second and the third reads in d1 could be reordered.
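A sketch of d1 after this reordering (our illustration; only the order of the last two reads changes):

let d1 () =
  let r1 = !a * 2 in
  let r3 = !a * 2 in  (* third read moved before the read of b *)
  let r2 = !b in
  (r1, r2, r3)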
With this reordering, we can explain the observed behaviour (2,0,2) returned by d1.
The other source of reordering is by the hardware. Modern hardware architectures have complex cache hierarchies with multiple levels of cache. While cache coherence ensures that reads and writes to a single memory location respect sequential consistency, the guarantees on programs that operate on different memory locations are much weaker. Consider the following program:
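A sketch of such a program is shown below; the main function and the assertion are assumptions, chosen to match the assertion discussed later in this chapter.

let a = ref 0 and b = ref 0   (* both initially 0 *)

let d1 () = a := 1; !b        (* returns r1 *)
let d2 () = b := 1; !a        (* returns r2 *)

let main () =
  let h1 = Domain.spawn d1 in
  let h2 = Domain.spawn d2 in
  let r1 = Domain.join h1 in
  let r2 = Domain.join h2 in
  assert (not (r1 = 0 && r2 = 0))

let () = main ()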
Under sequential consistency, we would never expect the assertion to fail. However, even on x86, which offers much stronger guarantees than ARM, the writes performed at a CPU core are not immediately published to all of the other cores. Since a and b are different memory locations, the reads of a and b may both witness the initial values, leading to the assertion failure.
This behaviour can be explained if a load is allowed to be reordered before a preceding store to a different memory location. This reordering can happen due to the presence of in-core store-buffers on modern processors. Each core effectively has a FIFO buffer of pending writes to avoid the need to block while a write completes. The writes to a and b may be in the store-buffers of cores c1 and c2 running the domains d1 and d2, respectively. The reads of b and a running on the cores c1 and c2, respectively, will not see the writes if the writes have not propagated from the buffers to the main memory.
The aim of the OCaml relaxed memory model is to precisely describe which orderings of memory operations are preserved when an OCaml program is compiled and executed. The compiler and the hardware are free to optimise the program as long as they respect the ordering guarantees of the memory model. While programming directly against the relaxed memory model is difficult, the memory model also describes the conditions under which a program will only exhibit sequentially consistent behaviours. This guarantee is known as data race freedom implies sequential consistency (DRF-SC). In this section, we shall describe this guarantee. In order to do this, we first need a number of definitions.
OCaml classifies memory locations into atomic and non-atomic locations. Reference cells, array fields and mutable record fields are non-atomic memory locations. Immutable objects are non-atomic locations with an initialising write but no further updates. Atomic memory locations are those that are created using the Atomic module.
Let us imagine that OCaml programs are executed by an abstract machine that executes one action at a time, arbitrarily picking one of the available domains at each step. We classify actions into two kinds: inter-domain and intra-domain. An inter-domain action is one that can be observed by, or can influence, other domains. Inter-domain actions include reads and writes of atomic and non-atomic memory locations, domain spawn and join operations, and operations on mutexes.
On the other hand, intra-domain actions can neither be observed nor influence the execution of other domains. Examples include evaluating an arithmetic expression, calling a function, etc. The memory model specification ignores such intra-domain actions. In the sequel, we use the term action to indicate inter-domain actions.
A totally ordered list of actions executed by the abstract machine is called an execution trace. There might be several possible execution traces for a given program due to non-determinism.
For a given execution trace, we define an irreflexive, transitive happens-before relation that captures the causality between actions in the OCaml program. The happens-before relation is defined as the smallest transitive relation satisfying, in particular, the following properties: an action that precedes another action in the program order of a domain happens-before it; an atomic write happens-before any atomic read that observes it; and domain spawn and join, as well as unlocking and subsequently locking a mutex, introduce happens-before edges between the domains involved.
In a given trace, two actions are said to be conflicting if they access the same non-atomic location, at least one is a write and neither is an initialising write to that location.
We say that a program has a data race if there exists some execution trace of the program with two conflicting actions and there does not exist a happens-before relationship between the conflicting accesses. A program without data races is said to be correctly synchronised.
DRF-SC guarantee: A program without data races will only exhibit sequentially consistent behaviours.
DRF-SC is a strong guarantee for the programmers. Programmers can use sequential reasoning, i.e., reasoning by executing one inter-domain action after the other, to identify whether their program has a data race. In particular, they do not need to reason about the reorderings described in section 10.1 in order to determine whether their program has a data race. Once the determination that a particular program is data race free is made, they do not need to worry about reorderings in their code.
In this section, we will look at examples of using DRF-SC for program reasoning. We will use functions named dN to represent domains executing in parallel with other domains. That is, we assume that there is a main function that runs the dN functions in parallel as follows:
let main () =
  let h1 = Domain.spawn d1 in
  let h2 = Domain.spawn d2 in
  ...
  ignore @@ Domain.join h1;
  ignore @@ Domain.join h2
Here is a simple example with a data race:
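For instance (a sketch):

let r = ref 0

let d1 () = r := 1   (* write *)
let d2 () = !r       (* read *)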
r is a non-atomic reference. The two domains race to access the reference, and the access in d1 is a write. Since there is no happens-before relationship between the conflicting accesses, there is a data race.
Both of the programs that we saw in section 10.1 have data races. It is no surprise that they exhibit non sequentially consistent behaviours.
Accessing disjoint array indices or different fields of a record in parallel is not a data race. For example, the programs sketched below do not have data races.
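The sketches below are illustrative; the array, the record type and the values are assumptions.

(* Parallel writes to disjoint array indices: no data race. *)
let arr = Array.make 2 0
let d1 () = arr.(0) <- 1
let d2 () = arr.(1) <- 2

(* Parallel writes to different mutable fields of a record: no data race. *)
type pair = { mutable left : int; mutable right : int }
let p = { left = 0; right = 0 }
let d3 () = p.left <- 1
let d4 () = p.right <- 2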
Races on atomic locations do not constitute a data race.
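For example (a sketch):

let c = Atomic.make 0

let d1 () = Atomic.incr c   (* concurrent accesses to an atomic location *)
let d2 () = Atomic.incr c   (* are not a data race *)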
Atomic variables may be used for implementing non-blocking communication between domains.
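A sketch of this message-passing idiom is shown below; the labels a, b, c and d name the actions referred to in the following discussion.

let msg = ref 0
let flag = Atomic.make false

let d1 () =
  msg := 42;              (* a: non-atomic write of the message *)
  Atomic.set flag true    (* b: atomic write of the flag *)

let d2 () =
  if Atomic.get flag      (* c: atomic read of the flag *)
  then !msg               (* d: non-atomic read of the message *)
  else 0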
Observe that the actions a and d write and read from the same non-atomic location msg, respectively, and hence are conflicting. We need to establish that a and d have a happens-before relationship in order to show that this program does not have a data race.
The action a precedes b in program order, and hence, a happens-before b. Similarly, c happens-before d. If d2 observes the atomic variable flag to be true, then b precedes c in happens-before order. Since happens-before is transitive, the conflicting actions a and d are in happens-before order. If d2 observes the flag to be false, then the read of msg is not done. Hence, there is no conflicting access in this execution trace. Hence, the program does not have a data race.
The following modified version of the message passing program does have a data race.
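A sketch of the modified receiver (d1 is unchanged):

let d2 () =
  let seen = Atomic.get flag in   (* c *)
  let v = !msg in                 (* d: unconditional read of msg *)
  if seen then v else 0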
The domain d2 now unconditionally reads the non-atomic reference msg. Consider the execution trace:
Atomic.get flag;      (* c *)
!msg;                 (* d *)
msg := 42;            (* a *)
Atomic.set flag true  (* b *)
In this trace, d and a are conflicting operations. But there is no happens-before relationship between them. Hence, this program has a data race.
The OCaml memory model offers strong guarantees even for programs with data races. It offers what is known as the local data race freedom sequential consistency (LDRF-SC) guarantee. A formal definition of this property is beyond the scope of this manual chapter. Interested readers are encouraged to read the PLDI 2018 paper on Bounding Data Races in Space and Time.
Informally, LDRF-SC says that the data race free parts of the program remain sequentially consistent. That is, even if the program has data races, those parts of the program that are disjoint from the parts with data races are amenable to sequential reasoning.
Consider the following snippet:
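The snippet in question has the following shape (the elided part stands for the rest of the program):

let c = ref 0 in
c := 42;
let a = !c in
...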
Observe that c is a newly allocated reference. Can the read of c return a value which is not 42? That is, can a ever be anything other than 42? Surprisingly, in the C++ and Java memory models, the answer is yes. Under the C++ memory model, if the program has a data race, even in unrelated parts, then the semantics is undefined. If this snippet were linked with a library that had a data race, then, under the C++ memory model, the read may return any value. Since data races on unrelated locations can affect program behaviour, we say that the C++ memory model is not bounded in space.
Unlike C++, the Java memory model is bounded in space. But the Java memory model is not bounded in time: data races in the future may affect past behaviour. For example, consider the translation of this example to Java. We assume a prior definition of class C { int x; } and a shared non-volatile variable C g. Now the snippet may be part of a larger program with parallel threads:
(* Thread 1 *)
C c = new C();
c.x = 42;
a = c.x;
g = c;

(* Thread 2 *)
g.x = 7;
The read of c.x and the write of g in the first thread are performed on separate memory locations. Hence, the Java memory model allows them to be reordered. As a result, the write in the second thread may occur before the read of c.x, and hence the read of c.x may return 7.
The OCaml equivalent of the Java code above is:
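One possible rendering is sketched below; the type of the shared cell g and the code run by the second domain are assumptions chosen to mirror the Java version.

let g : int ref option ref = ref None

(* Domain 1 *)
let d1 () =
  let c = ref 0 in
  c := 42;
  let a = !c in
  g := Some c;      (* publish c through the non-atomic cell g *)
  a

(* Domain 2 *)
let d2 () =
  match !g with
  | Some r -> r := 7
  | None -> ()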
Observe that there is a data race on both g and c. Consider only the first three instructions in the snippet:
let c = ref 0 in
c := 42;
let a = !c in
...
The OCaml memory model is bounded both in space and in time. The only memory location in this snippet is c. Reasoning only about this snippet, there is neither a data race in space (the race on g) nor a data race in time (the future race on c). Hence, the snippet will have sequentially consistent behaviour, and the value returned by !c will be 42.
The OCaml memory model guarantees that even for programs with data races, memory safety is preserved. While programs with data races may observe non-sequentially consistent behaviours, they will not crash.
In this section, we describe the semantics of the OCaml memory model. A formal definition of the operational view of the memory model is presented in section 3 of the PLDI 2018 paper on Bounding Data Races in Space and Time. This section presents an informal description of the memory model with the help of an example.
Given an OCaml program, which may possibly contain data races, the operational semantics tells you the values that may be observed by the read of a memory location. For simplicity, we restrict the actions to just the accesses to atomic and non-atomic locations, ignoring domain spawn and join operations, and the operations on mutexes.
We describe the semantics of the OCaml memory model in a straightforward small-step operational manner. That is, the semantics is described by an abstract machine that executes one action at a time, arbitrarily picking one of the available domains at each step. This is similar to the abstract machine that we used to describe the happens-before relationship in section 10.2.2.
In the semantics, we model non-atomic locations as finite maps from timestamps t to values v. We take timestamps to be rational numbers. The timestamps are totally ordered and dense; there is a timestamp between any two distinct timestamps.
For example,
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t8 -> 7]
represents three non-atomic locations a, b and c and their histories. The location a has two writes at timestamps t1 and t2 with values 1 and 2, respectively. When we write a: [t1 -> 1; t2 -> 2], we assume that t1 < t2. We assume that the locations are initialised with a history that has a single entry at timestamp 0 that maps to the initial value.
Each domain is equipped with a frontier, which is a map from non-atomic locations to timestamps. Intuitively, each domain's frontier records, for each non-atomic location, the latest write known to the domain. More recent writes may have occurred, but they are not guaranteed to be visible.
For example,
d1: [a -> t1; b -> t3; c -> t7]
d2: [a -> t1; b -> t4; c -> t7]
represents two domains d1 and d2 and their frontiers.
Let us now define the semantics of non-atomic reads and writes. Suppose domain d1 performs a read of b. For non-atomic reads, the domain may read an arbitrary entry of the history for that location, as long as it is not older than the timestamp in the domain's frontier. In this case, since d1's frontier at b is t3, the read may return the value 3, 4 or 5. A non-atomic read does not change the frontier of the current domain.
Suppose domain d2 writes the value 10 to c (c := 10). We pick a new timestamp t9 for this write such that it is later than d2's frontier at c. Note a subtlety here: this new timestamp might not be later than everything else in the history, but merely later than any other write known to the writing domain. Hence, t9 may be inserted in c's history either (a) between t7 and t8 or (b) after t8. Let us pick the former option for our discussion. Since the new write appears after all the writes known by the domain d2 to the location c, d2's frontier at c is also updated. The new state of the abstract machine is:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7] (* new write at t9 *)

(* Domains *)
d1: [a -> t1; b -> t3; c -> t7]
d2: [a -> t1; b -> t4; c -> t9] (* frontier updated at c *)
Atomic locations carry not only values but also synchronization information. We model atomic locations as a pair of the value held by that location and a frontier. The frontier models the synchronization information, which is merged with the frontiers of threads that operate on the location. In this way, non-atomic writes made by one thread can become known to another by communicating via an atomic location.
For example,
(* Atomic locations *)
A: 10, [a -> t1; b -> t5; c -> t7]
B: 5, [a -> t2; b -> t4; c -> t6]
shows two atomic variables A and B with values 10 and 5, respectively, and frontiers of their own. We use upper-case variable names to indicate atomic locations.
During atomic reads, the frontier of the location is merged into that of the domain performing the read. For example, suppose d1 reads B. The read returns 5, and d1's frontier is updated by merging it with B's frontier, choosing the later timestamp for each location. The abstract machine state before the atomic read is:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7]

(* Domains *)
d1: [a -> t1; b -> t3; c -> t7]
d2: [a -> t1; b -> t4; c -> t9]

(* Atomic locations *)
A: 10, [a -> t1; b -> t5; c -> t7]
B: 5, [a -> t2; b -> t4; c -> t6]
As a result of the atomic read, the abstract machine state is updated to:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7]

(* Domains *)
d1: [a -> t2; b -> t4; c -> t7] (* frontier updated at a and b *)
d2: [a -> t1; b -> t4; c -> t9]

(* Atomic locations *)
A: 10, [a -> t1; b -> t5; c -> t7]
B: 5, [a -> t2; b -> t4; c -> t6]
During atomic writes, the value held by the atomic location is updated. The frontiers of both the writing domain and the location being written to are updated to the merge of the two frontiers. For example, if d2 writes 20 to A in the current machine state, the machine state is updated to:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7]

(* Domains *)
d1: [a -> t2; b -> t4; c -> t7]
d2: [a -> t1; b -> t5; c -> t9] (* frontier updated at b *)

(* Atomic locations *)
A: 20, [a -> t1; b -> t5; c -> t9] (* value updated; frontier updated at c *)
B: 5, [a -> t2; b -> t4; c -> t6]
Let us revisit an example from earlier (section 10.1).
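For reference, that program is, schematically:

let a = ref 0 and b = ref 0

let d1 () = a := 1; !b   (* result bound to r1 *)
let d2 () = b := 1; !a   (* result bound to r2 *)

(* main spawns d1 and d2 and asserts: not (r1 = 0 && r2 = 0) *)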
This program has a data race on a and b, and hence, the program may exhibit non sequentially consistent behaviour. Let us use the semantics to show that the program may exhibit r1 = 0 && r2 = 0.
The initial state of the abstract machine is:
(* Non-atomic locations *)
a: [t0 -> 0]
b: [t1 -> 0]

(* Domains *)
d1: [a -> t0; b -> t1]
d2: [a -> t0; b -> t1]
There are several possible schedules for executing this program. Let us consider the following schedule:
1: a := 1  @ d1
2: b := 1  @ d2
3: !b      @ d1
4: !a      @ d2
After the first action a:=1 by d1, the machine state is:
(* Non-atomic locations *)
a: [t0 -> 0; t2 -> 1] (* new write at t2 *)
b: [t1 -> 0]

(* Domains *)
d1: [a -> t2; b -> t1] (* frontier updated at a *)
d2: [a -> t0; b -> t1]
After the second action b:=1 by d2, the machine state is:
(* Non-atomic locations *)
a: [t0 -> 0; t2 -> 1]
b: [t1 -> 0; t3 -> 1] (* new write at t3 *)

(* Domains *)
d1: [a -> t2; b -> t1]
d2: [a -> t0; b -> t3] (* frontier updated at b *)
Now, for the third action !b by d1, observe that d1's frontier at b is at t1. Hence, the read may return either 0 or 1. Let us assume that it returns 0. The machine state is not updated by the non-atomic read.
Similarly, for the fourth action !a by d2, d2's frontier at a is at t0. Hence, this read may also return either 0 or 1. Let us assume that it returns 0. Hence, the assertion in the original program, assert (not (r1 = 0 && r2 = 0)), will fail for this particular execution.
There are certain operations which are not memory model compliant.
Part II
This document is intended as a reference manual for the OCaml language. It lists the language constructs, and gives their precise syntax and informal semantics. It is by no means a tutorial introduction to the language. A good working knowledge of OCaml is assumed.
No attempt has been made at mathematical rigor: words are employed with their intuitive meaning, without further definition. As a consequence, the typing rules have been left out, for lack of the mathematical framework required to express them, even though they are definitely part of a full formal definition of the language.
The syntax of the language is given in BNF-like notation. Terminal symbols are set in typewriter font (like this). Non-terminal symbols are set in italic font (like that). Square brackets […] denote optional components. Curly brackets {…} denote zero, one or several repetitions of the enclosed components. Curly brackets with a trailing plus sign {…}+ denote one or several repetitions of the enclosed components. Parentheses (…) denote grouping.
The following characters are considered as blanks: space, horizontal tabulation, carriage return, line feed and form feed. Blanks are ignored, but they separate adjacent identifiers, literals and keywords that would otherwise be confused as one single identifier, literal or keyword.
Comments are introduced by the two characters (*, with no intervening blanks, and terminated by the characters *), with no intervening blanks. Comments are treated as blank characters. Comments do not occur inside string or character literals. Nested comments are handled correctly.
Identifiers are sequences of letters, digits, _ (the underscore character), and ' (the single quote), starting with a letter or an underscore. Letters contain at least the 52 lowercase and uppercase letters from the ASCII set. The current implementation also recognizes as letters some characters from the ISO 8859-1 set (characters 192-214 and 216-222 as uppercase letters; characters 223-246 and 248-255 as lowercase letters). This feature is deprecated and should be avoided for future compatibility.
All characters in an identifier are meaningful. The current implementation accepts identifiers up to 16000000 characters in length.
In many places, OCaml makes a distinction between capitalized identifiers and identifiers that begin with a lowercase letter. The underscore character is considered a lowercase letter for this purpose.
An integer literal is a sequence of one or more digits, optionally preceded by a minus sign. By default, integer literals are in decimal (radix 10). The following prefixes select a different radix:
Prefix | Radix |
0x, 0X | hexadecimal (radix 16) |
0o, 0O | octal (radix 8) |
0b, 0B | binary (radix 2) |
(The initial 0 is the digit zero; the O for octal is the letter O.) An integer literal can be followed by one of the letters l, L or n to indicate that this integer has type int32, int64 or nativeint respectively, instead of the default type int for integer literals. The interpretation of integer literals that fall outside the range of representable integer values is undefined.
For convenience and readability, underscore characters (_) are accepted (and ignored) within integer literals.
Floating-point decimal literals consist in an integer part, a fractional part and an exponent part. The integer part is a sequence of one or more digits, optionally preceded by a minus sign. The fractional part is a decimal point followed by zero, one or more digits. The exponent part is the character e or E followed by an optional + or - sign, followed by one or more digits. It is interpreted as a power of 10. The fractional part or the exponent part can be omitted but not both, to avoid ambiguity with integer literals. The interpretation of floating-point literals that fall outside the range of representable floating-point values is undefined.
Floating-point hexadecimal literals are denoted with the 0x or 0X prefix. The syntax is similar to that of floating-point decimal literals, with the following differences. The integer part and the fractional part use hexadecimal digits. The exponent part starts with the character p or P. It is written in decimal and interpreted as a power of 2.
For convenience and readability, underscore characters (_) are accepted (and ignored) within floating-point literals.
Character literals are delimited by ' (single quote) characters. The two single quotes enclose either one character different from ' and \, or one of the escape sequences below:
Sequence | Character denoted |
\\ | backslash (\) |
\" | double quote (") |
\' | single quote (') |
\n | linefeed (LF) |
\r | carriage return (CR) |
\t | horizontal tabulation (TAB) |
\b | backspace (BS) |
\space | space (SPC) |
\ddd | the character with ASCII code ddd in decimal |
\xhh | the character with ASCII code hh in hexadecimal |
\oooo | the character with ASCII code ooo in octal |
String literals are delimited by " (double quote) characters. The two double quotes enclose a sequence of either characters different from " and \, or escape sequences from the table given above for character literals, or a Unicode character escape sequence.
A Unicode character escape sequence is substituted by the UTF-8 encoding of the specified Unicode scalar value. The Unicode scalar value, an integer in the ranges 0x0000...0xD7FF or 0xE000...0x10FFFF, is defined using 1 to 6 hexadecimal digits; leading zeros are allowed.
To allow splitting long string literals across lines, the sequence \newline spaces-or-tabs (a backslash at the end of a line followed by any number of spaces and horizontal tabulations at the beginning of the next line) is ignored inside string literals.
Quoted string literals provide an alternative lexical syntax for string literals. They are useful to represent strings of arbitrary content without escaping. Quoted strings are delimited by a matching pair of { quoted-string-id | and | quoted-string-id } with the same quoted-string-id on both sides. Quoted strings do not interpret any character in a special way, but require that the sequence | quoted-string-id } does not occur in the string itself. The identifier quoted-string-id is a (possibly empty) sequence of lowercase letters and underscores that can be freely chosen to avoid such issues.
The current implementation places practically no restrictions on the length of string literals.
To avoid ambiguities, naming labels in expressions cannot just be defined syntactically as the sequence of the three tokens ~, ident and :, and have to be defined at the lexical level.
Naming labels come in two flavours: label for normal arguments and optlabel for optional ones. They are simply distinguished by their first character, either ~ or ?.
Despite label and optlabel being lexical entities in expressions, their expansions ~ label-name : and ? label-name : will be used in grammars, for the sake of readability. Note also that inside type expressions, this expansion can be taken literally, i.e. there are really 3 tokens, with optional blanks between them.
See also the following language extensions: extension operators, extended indexing operators, and binding operators.
Sequences of âoperator charactersâ, such as <=> or !!, are read as a single token from the infix-symbol or prefix-symbol class. These symbols are parsed as prefix and infix operators inside expressions, but otherwise behave like normal identifiers.
The identifiers below are reserved as keywords, and cannot be employed otherwise:
and as assert asr begin class constraint do done downto else end exception external false for fun function functor if in include inherit initializer land lazy let lor lsl lsr lxor match method mod module mutable new nonrec object of open or private rec sig struct then to true try type val virtual when while with
The following character sequences are also keywords:
!= # & && ' ( ) * + , - -. -> . .. .~ : :: := :> ; ;; < <- = > >] >} ? [ [< [> [| ] _ ` { {< | |] || } ~
Note that the following identifiers are keywords of the now unmaintained Camlp4 system and should be avoided for backwards compatibility reasons.
parser value $ $$ $: <: << >> ??
Lexical ambiguities are resolved according to the "longest match" rule: when a character sequence can be decomposed into two tokens in several different ways, the decomposition retained is the one with the longest first token.
Preprocessors that generate OCaml source code can insert line number directives in their output so that error messages produced by the compiler contain line numbers and file names referring to the source file before preprocessing, instead of after preprocessing. A line number directive starts at the beginning of a line, is composed of a # (sharp sign), followed by a positive integer (the source line number), followed by a character string (the source file name). Line number directives are treated as blanks during lexical analysis.
This section describes the kinds of values that are manipulated by OCaml programs.
Integer values are integer numbers from -2^30 to 2^30 - 1, that is -1073741824 to 1073741823. The implementation may support a wider range of integer values: on 64-bit platforms, the current implementation supports integers ranging from -2^62 to 2^62 - 1.
Floating-point values are numbers in floating-point representation. The current implementation uses double-precision floating-point numbers conforming to the IEEE 754 standard, with 53 bits of mantissa and an exponent ranging from -1022 to 1023.
Character values are represented as 8-bit integers between 0 and 255. Character codes between 0 and 127 are interpreted following the ASCII standard. The current implementation interprets character codes between 128 and 255 following the ISO 8859-1 standard.
String values are finite sequences of characters. The current implementation supports strings containing up to 2^24 - 5 characters (16777211 characters); on 64-bit platforms, the limit is 2^57 - 9.
Tuples of values are written (v1, …, vn), standing for the n-tuple of values v1 to vn. The current implementation supports tuples of up to 2^22 - 1 elements (4194303 elements).
Record values are labeled tuples of values. The record value written { field1 = v1; …; fieldn = vn } associates the value vi to the record field fieldi, for i = 1 … n. The current implementation supports records with up to 2^22 - 1 fields (4194303 fields).
Arrays are finite, variable-sized sequences of values of the same type. The current implementation supports arrays containing up to 2^22 - 1 elements (4194303 elements) unless the elements are floating-point numbers (2097151 elements in this case); on 64-bit platforms, the limit is 2^54 - 1 for all arrays.
Variant values are either a constant constructor, or a non-constant constructor applied to a number of values. The former case is written constr; the latter case is written constr (v1, ... , vn ), where the vi are said to be the arguments of the non-constant constructor constr. The parentheses may be omitted if there is only one argument.
The following constants are treated like built-in constant constructors:
Constant | Constructor |
false | the boolean false |
true | the boolean true |
() | the "unit" value |
[] | the empty list |
The current implementation limits each variant type to have at most 2^46 non-constant constructors and 2^30 - 1 constant constructors.
Polymorphic variants are an alternate form of variant values, not belonging explicitly to a predefined variant type, and following specific typing rules. They can be either constant, written `tag-name, or non-constant, written `tag-name(v).
Functional values are mappings from values to values.
Objects are composed of a hidden internal state which is a record of instance variables, and a set of methods for accessing and modifying these variables. The structure of an object is described by the toplevel class that created it.
Identifiers are used to give names to several classes of language objects and refer to these objects by name later:
These eleven name spaces are distinguished both by the context and by the capitalization of the identifier: whether the first letter of the identifier is in lowercase (written lowercase-ident below) or in uppercase (written capitalized-ident). Underscore is considered a lowercase letter for this purpose.
See also the following language extension: extended indexing operators.
As shown above, prefix and infix symbols as well as some keywords can be used as value names, provided they are written between parentheses. The capitalization rules are summarized in the table below.
Name space | Case of first letter |
Values | lowercase |
Constructors | uppercase |
Labels | lowercase |
Polymorphic variant tags | uppercase |
Exceptions | uppercase |
Type constructors | lowercase |
Record fields | lowercase |
Classes | lowercase |
Instance variables | lowercase |
Methods | lowercase |
Modules | uppercase |
Module types | any |
Note on polymorphic variant tags: the current implementation accepts lowercase variant tags in addition to capitalized variant tags, but we suggest you avoid lowercase variant tags for portability and compatibility with future OCaml versions.
A named object can be referred to either by its name (following the usual static scoping rules for names) or by an access path prefix . name, where prefix designates a module and name is the name of an object defined in that module. The first component of the path, prefix, is either a simple module name or an access path name1 . name2 …, in case the defining module is itself nested inside other modules. For referring to type constructors, module types, or class types, the prefix can also contain simple functor applications (as in the syntactic class extended-module-path above) in case the defining module is the result of a functor application.
Label names, tag names, method names and instance variable names need not be qualified: the former three are global labels, while the latter are local to a class.
See also the following language extensions: first-class modules, attributes and extension nodes.
The table below shows the relative precedences and associativity of operators and non-closed type constructions. The constructions with higher precedences come first.
Operator | Associativity |
Type constructor application | – |
# | – |
* | – |
-> | right |
as | – |
Type expressions denote types in definitions of data types as well as in type constraints over patterns and expressions.
The type expression ' ident stands for the type variable named ident. The type expression _ stands for either an anonymous type variable or anonymous type parameters. In data type definitions, type variables are names for the data type parameters. In type constraints, they represent unspecified types that can be instantiated by any type to satisfy the type constraint. In general the scope of a named type variable is the whole top-level phrase where it appears, and it can only be generalized when leaving this scope. Anonymous variables have no such restriction. In the following cases, the scope of named type variables is restricted to the type expression where they appear: 1) for universal (explicitly polymorphic) type variables; 2) for type variables that only appear in public method specifications (as those variables will be made universal, as described in section 11.9.1); 3) for variables used as aliases, when the type they are aliased to would be invalid in the scope of the enclosing definition (i.e. when it contains free universal type variables, or locally defined types.)
The type expression ( typexpr ) denotes the same type as typexpr.
The type expression typexpr1 -> typexpr2 denotes the type of functions mapping arguments of type typexpr1 to results of type typexpr2.
label-name : typexpr1 -> typexpr2 denotes the same function type, but the argument is labeled label.
? label-name : typexpr1 -> typexpr2 denotes the type of functions mapping an optional labeled argument of type typexpr1 to results of type typexpr2. That is, the physical type of the function will be typexpr1 option -> typexpr2.
The type expression typexpr1 * … * typexprn denotes the type of tuples whose elements belong to types typexpr1, …, typexprn respectively.
Type constructors with no parameter, as in typeconstr, are type expressions.
The type expression typexpr typeconstr, where typeconstr is a type constructor with one parameter, denotes the application of the unary type constructor typeconstr to the type typexpr.
The type expression (typexpr1, …, typexprn) typeconstr, where typeconstr is a type constructor with n parameters, denotes the application of the n-ary type constructor typeconstr to the types typexpr1 through typexprn.
In the type expression _ typeconstr , the anonymous type expression _ stands in for anonymous type parameters and is equivalent to (_, …, _) with as many repetitions of _ as the arity of typeconstr.
The type expression typexpr as ' ident denotes the same type as typexpr, and also binds the type variable ident to type typexpr both in typexpr and in other types. In general the scope of an alias is the same as for a named type variable, and covers the whole enclosing definition. If the type variable ident actually occurs in typexpr, a recursive type is created. Recursive types for which there exists a recursive path that does not contain an object or polymorphic variant type constructor are rejected, except when the -rectypes mode is selected.
If ' ident denotes an explicit polymorphic variable, and typexpr denotes either an object or polymorphic variant type, the row variable of typexpr is captured by ' ident, and quantified upon.
Polymorphic variant types describe the values a polymorphic variant may take.
The first case is an exact variant type: all possible tags are known, with their associated types, and they can all be present. Its structure is fully known.
The second case is an open variant type, describing a polymorphic variant value: it gives the list of all tags the value could take, with their associated types. This type is still compatible with a variant type containing more tags. A special case is the unknown type, which does not define any tag, and is compatible with any variant type.
The third case is a closed variant type. It gives information about all the possible tags and their associated types, and which tags are known to potentially appear in values. The exact variant type (first case) is just an abbreviation for a closed variant type where all possible tags are also potentially present.
In all three cases, tags may be either specified directly in the `tag-name [of typexpr] form, or indirectly through a type expression, which must expand to an exact variant type, whose tag specifications are inserted in its place.
Full specifications of variant tags are only used for non-exact closed types. They can be understood as a conjunctive type for the argument: the argument is intended to have all the types enumerated in the specification.
Such conjunctive constraints may be unsatisfiable. In such a case the corresponding tag may not be used in a value of this type. This does not mean that the whole type is not valid: one can still use other available tags. Conjunctive constraints are mainly intended as output from the type checker. When they are used in source programs, unsolvable constraints may cause early failures.
An object type < [method-type { ; method-type }] > is a record of method types.
Each method may have an explicit polymorphic type: { ' ident }+ . typexpr. Explicit polymorphic variables have a local scope, and an explicit polymorphic type can only be unified to an equivalent one, where only the order and names of polymorphic variables may change.
The type < { method-type ; } .. > is the type of an object whose method names and types are described by method-type1, …, method-typen, and possibly some other methods represented by the ellipsis. This ellipsis actually is a special kind of type variable (called row variable in the literature) that stands for any number of extra method types.
The type # classtype-path is a special kind of abbreviation. This abbreviation unifies with the type of any object belonging to a subclass of the class type classtype-path. It is handled in a special way as it usually hides a type variable (an ellipsis, representing the methods that may be added in a subclass). In particular, it vanishes when the ellipsis gets instantiated. Each type expression # classtype-path defines a new type variable, so type # classtype-path -> # classtype-path is usually not the same as type (# classtype-path as ' ident) -> ' ident.
Use of #-types to abbreviate polymorphic variant types is deprecated. If t is an exact variant type then #t translates to [< t], and #t[> `tag1 … `tagk] translates to [< t > `tag1 … `tagk].
There are no type expressions describing (defined) variant types nor record types, since those are always named, i.e. defined before use and referred to by name. Type definitions are described in section 11.8.1.
See also the following language extension: extension literals.
The syntactic class of constants comprises literals from the four base types (integers, floating-point numbers, characters, character strings), the integer variants, and constant constructors from both normal and polymorphic variants, as well as the special constants false, true, (), [], and [||], which behave like constant constructors, and begin end, which is equivalent to ().
See also the following language extensions: first-class modules, attributes and extension nodes.
The table below shows the relative precedences and associativity of operators and non-closed pattern constructions. The constructions with higher precedences come first.
Operator | Associativity |
.. | – |
lazy (see section 11.6) | – |
Constructor application, Tag application | right |
:: | right |
, | – |
| | left |
as | – |
Patterns are templates that allow selecting data structures of a given shape, and binding identifiers to components of the data structure. This selection operation is called pattern matching; its outcome is either "this value does not match this pattern", or "this value matches this pattern, resulting in the following bindings of names to values".
A pattern that consists in a value name matches any value, binding the name to the value. The pattern _ also matches any value, but does not bind any name.
Patterns are linear: a variable cannot be bound several times by a given pattern. In particular, there is no way to test for equality between two parts of a data structure using only a pattern:
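For example, the following attempt to test the two components of a pair for equality is rejected (a sketch; the exact error text may vary between versions):

# let pair_equal = function (x, x) -> true | _ -> false;;
Error: Variable x is bound several times in this matching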
However, we can use a when guard for this purpose:
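For example (a sketch):

# let pair_equal = function (x, y) when x = y -> true | _ -> false;;
val pair_equal : 'a * 'a -> bool = <fun>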
A pattern consisting in a constant matches the values that are equal to this constant.
The pattern pattern1 as value-name matches the same values as pattern1. If the matching against pattern1 is successful, the name value-name is bound to the matched value, in addition to the bindings performed by the matching against pattern1.
The pattern ( pattern1 ) matches the same values as pattern1. A type constraint can appear in a parenthesized pattern, as in ( pattern1 : typexpr ). This constraint forces the type of pattern1 to be compatible with typexpr.
The pattern pattern1 | pattern2 represents the logical "or" of the two patterns pattern1 and pattern2. A value matches pattern1 | pattern2 if it matches pattern1 or pattern2. The two sub-patterns pattern1 and pattern2 must bind exactly the same identifiers to values having the same types. Matching is performed from left to right. More precisely, in case some value v matches pattern1 | pattern2, the bindings performed are those of pattern1 when v matches pattern1. Otherwise, value v matches pattern2 whose bindings are performed.
The pattern constr ( pattern1 , … , patternn ) matches all variants whose constructor is equal to constr, and whose arguments match pattern1 … patternn. It is a type error if n is not the number of arguments expected by the constructor.
The pattern constr _ matches all variants whose constructor is constr.
The pattern pattern1 :: pattern2 matches non-empty lists whose heads match pattern1, and whose tails match pattern2.
The pattern [ pattern1 ; … ; patternn ] matches lists of length n whose elements match pattern1 … patternn, respectively. This pattern behaves like pattern1 :: … :: patternn :: [].
The pattern `tag-name pattern1 matches all polymorphic variants whose tag is equal to tag-name, and whose argument matches pattern1.
If the type [('a,'b,…)] typeconstr = [ `tag-name1 typexpr1 | … | `tag-namen typexprn] is defined, then the pattern #typeconstr is a shorthand for the following or-pattern: ( `tag-name1(_ : typexpr1) | … | `tag-namen(_ : typexprn)). It matches all values of type [< typeconstr ].
The pattern pattern1 , … , patternn matches n-tuples whose components match the patterns pattern1 through patternn. That is, the pattern matches the tuple values (v1, …, vn) such that patterni matches vi for i = 1, …, n.
The pattern { field1 [= pattern1] ; … ; fieldn [= patternn] } matches records that define at least the fields field1 through fieldn, and such that the value associated to fieldi matches the pattern patterni, for i = 1, …, n. A single identifier fieldk stands for fieldk = fieldk , and a single qualified identifier module-path . fieldk stands for module-path . fieldk = fieldk . The record value can define more fields than field1 … fieldn; the values associated to these extra fields are not taken into account for matching. Optionally, a record pattern can be terminated by ; _ to convey the fact that not all fields of the record type are listed in the record pattern and that it is intentional. Optional type constraints can be added field by field with { field1 : typexpr1 = pattern1 ; … ; fieldn : typexprn = patternn } to force the type of fieldk to be compatible with typexprk.
The pattern [| pattern1 ; … ; patternn |] matches arrays of length n such that the i-th array element matches the pattern patterni, for i = 1, …, n.
The pattern ' c ' .. ' d ' is a shorthand for the pattern ' c1 ' | ' c2 ' | … | ' cn ', where c1, c2, …, cn are the characters that occur between c and d in the ASCII character set. For instance, the pattern '0'..'9' matches all characters that are digits.
(Introduced in Objective Caml 3.11)
The pattern lazy pattern matches a value v of type Lazy.t, provided pattern matches the result of forcing v with Lazy.force. A successful match of a pattern containing lazy sub-patterns forces the corresponding parts of the value being matched, even those that imply no test such as lazy value-name or lazy _. Matching a value with a pattern-matching where some patterns contain lazy sub-patterns may imply forcing parts of the value, even when the pattern selected in the end has no lazy sub-pattern.
For more information, see the description of module Lazy in the standard library (module Lazy).
(Introduced in OCaml 4.02)
A new form of exception pattern, exception pattern , is allowed only as a toplevel pattern or inside a toplevel or-pattern under a match...with pattern-matching (other occurrences are rejected by the type-checker).
Cases with such a toplevel pattern are called âexception casesâ, as opposed to regular âvalue casesâ. Exception cases are applied when the evaluation of the matched expression raises an exception. The exception value is then matched against all the exception cases and re-raised if none of them accept the exception (as with a try...with block). Since the bodies of all exception and value cases are outside the scope of the exception handler, they are all considered to be in tail-position: if the match...with block itself is in tail position in the current function, any function call in tail position in one of the case bodies results in an actual tail call.
A pattern match must contain at least one value case. It is an error if all cases are exceptions, because there would be no code to handle the return of a value.
For patterns, local opens are limited to the module-path.(pattern) construction. This construction locally opens the module referred to by the module path module-path in the scope of the pattern pattern.
When the body of a local open pattern is delimited by [ ], [| |], or { }, the parentheses can be omitted. For example, module-path.[pattern] is equivalent to module-path.([pattern]), and module-path.[| pattern |] is equivalent to module-path.([| pattern |]).
See also the following language extensions: first-class modules, overriding in open statements, syntax for Bigarray access, attributes, extension nodes and extended indexing operators.
The table below shows the relative precedences and associativity of operators and non-closed constructions. The constructions with higher precedence come first. For infix and prefix symbols, we write "*…" to mean "any symbol starting with *".
Construction or operator | Associativity |
prefix-symbol | – |
. .( .[ .{ (see section 12.11) | – |
#… | left |
function application, constructor application, tag application, assert, lazy | left |
- -. (prefix) | – |
**… lsl lsr asr | right |
*… /… %… mod land lor lxor | left |
+… -… | left |
:: | right |
@… ^… | right |
=… <… >… |… &… $… != | left |
& && | right |
or || | right |
, | – |
<- := | right |
if | – |
; | right |
let match fun function try | – |
It is simple to test or refresh one's understanding:
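For example, in the interactive system (a few illustrative phrases):

# 1 + 2 * 3;;                (* * binds tighter than + *)
- : int = 7
# 1 - 2 - 3;;                (* - is left-associative: (1 - 2) - 3 *)
- : int = -4
# 1 :: 2 :: [];;             (* :: is right-associative *)
- : int list = [1; 2]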
An expression consisting in a constant evaluates to this constant. For example, 3.14 or [||].
An expression consisting in an access path evaluates to the value bound to this path in the current evaluation environment. The path can be either a value name or an access path to a value component of a module.
The expressions ( expr ) and begin expr end have the same value as expr. The two constructs are semantically equivalent, but it is good style to use begin … end inside control structures:

if … then begin … ; … end else begin … ; … end

and ( … ) for the other grouping situations.
Parenthesized expressions can contain a type constraint, as in ( expr : typexpr ). This constraint forces the type of expr to be compatible with typexpr.
Parenthesized expressions can also contain coercions ( expr [: typexpr] :> typexpr) (see subsection 11.7.7 below).
Function application is denoted by juxtaposition of (possibly labeled) expressions. The expression expr argument1 … argumentn evaluates the expression expr and those appearing in argument1 to argumentn. The expression expr must evaluate to a functional value f, which is then applied to the values of argument1, …, argumentn.
The order in which the expressions expr, argument1, …, argumentn are evaluated is not specified.
Arguments and parameters are matched according to their respective labels. Argument order is irrelevant, except among arguments with the same label, or no label.
If a parameter is specified as optional (label prefixed by ?) in the type of expr, the corresponding argument will be automatically wrapped with the constructor Some, except if the argument itself is also prefixed by ?, in which case it is passed as is.
If a non-labeled argument is passed, and its corresponding parameter is preceded by one or several optional parameters, then these parameters are defaulted, i.e. the value None will be passed for them. All other missing parameters (without corresponding argument), both optional and non-optional, will be kept, and the result of the function will still be a function of these missing parameters to the body of f.
In all cases but exact match of order and labels, without optional parameters, the function type should be known at the application point. This can be ensured by adding a type constraint. Principality of the derivation can be checked in the -principal mode.
As a special case, OCaml supports labels-omitted full applications: if the function has a known arity, all the arguments are unlabeled, and their number matches the number of non-optional parameters, then labels are ignored and non-optional parameters are matched in their definition order. Optional arguments are defaulted. This omission of labels is discouraged and results in a warning, see 13.5.1.
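For instance, given a function with one labeled, one optional, and one unlabeled parameter (the names below are purely illustrative), arguments can be reordered by label and the optional parameter is defaulted when the unlabeled argument is supplied:
  let f ~x ?(y = 10) z = x + y + z;;
  f ~x:1 2;;        (* y defaults to 10; evaluates to 13 *)
  f 2 ~x:1;;        (* same application with the arguments reordered by label *)
  f ~x:1 ~y:2 3;;   (* evaluates to 6 *)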
Two syntactic forms are provided to define functions. The first form is introduced by the keyword function:
This expression evaluates to a functional value with one argument. When this function is applied to a value v, this value is matched against each pattern pattern1 to patternn. If one of these matchings succeeds, that is, if the value v matches the pattern patterni for some i, then the expression expri associated to the selected pattern is evaluated, and its value becomes the value of the function application. The evaluation of expri takes place in an environment enriched by the bindings performed during the matching.
If several patterns match the argument v, the one that occurs first in the function definition is selected. If none of the patterns matches the argument, the exception Match_failure is raised.
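For instance, a small sketch (the name describe is illustrative):
  let describe = function
    | 0 -> "zero"
    | 1 -> "one"
    | _ -> "many";;
  describe 1;;   (* evaluates to "one" *)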
The other form of function definition is introduced by the keyword fun:
This expression is equivalent to:
An optional type constraint typexpr can be added before -> to enforce the type of the result to be compatible with the constraint typexpr:
is equivalent to
Beware of the small syntactic difference between a type constraint on the last parameter
and one on the result
The parameter patterns ~lab and ~(lab [: typ]) are shorthands for respectively ~lab:lab and ~lab:(lab [: typ]), and similarly for their optional counterparts.
A function of the form fun ? lab :( pattern = expr0 ) -> expr is equivalent to
where ident is a fresh variable, except that it is unspecified when expr0 is evaluated.
After these two transformations, expressions are of the form
If we ignore labels, which will only be meaningful at function application, this is equivalent to
That is, the fun expression above evaluates to a curried function with n arguments: after applying this function n times to the values v1 … vn, the values will be matched in parallel against the patterns pattern1 … patternn. If the matching succeeds, the function returns the value of expr in an environment enriched by the bindings performed during the matchings. If the matching fails, the exception Match_failure is raised.
The cases of a pattern matching (in the function, match and try constructs) can include guard expressions, which are arbitrary boolean expressions that must evaluate to true for the match case to be selected. Guards occur just before the -> token and are introduced by the when keyword:
Matching proceeds as described before, except that if the value matches some pattern patterni which has a guard condi, then the expression condi is evaluated (in an environment enriched by the bindings performed during matching). If condi evaluates to true, then expri is evaluated and its value returned as the result of the matching, as usual. But if condi evaluates to false, the matching is resumed against the patterns following patterni.
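For instance, in the following sketch (the name sign is illustrative), the first case is selected only when its guard holds:
  let sign = function
    | n when n > 0 -> 1
    | 0 -> 0
    | _ -> -1;;
  sign (-5);;   (* the first case matches but its guard is false, so the result is -1 *)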
The let and let rec constructs bind value names locally. The construct
evaluates expr1 … exprn in some unspecified order and matches their values against the patterns pattern1 … patternn. If the matchings succeed, expr is evaluated in the environment enriched by the bindings performed during matching, and the value of expr is returned as the value of the whole let expression. If one of the matchings fails, the exception Match_failure is raised.
An alternate syntax is provided to bind variables to functional values: instead of writing
in a let expression, one may instead write
Recursive definitions of names are introduced by let rec:
The only difference with the let construct described above is that the bindings of names to values performed by the pattern-matching are considered already performed when the expressions expr1 to exprn are evaluated. That is, the expressions expr1 to exprn can reference identifiers that are bound by one of the patterns pattern1, …, patternn, and expect them to have the same value as in expr, the body of the let rec construct.
The recursive definition is guaranteed to behave as described above if the expressions expr1 to exprn are function definitions (fun … or function …), and the patterns pattern1 … patternn are just value names, as in:
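let rec name1 = fun … and … and namen = fun … in expr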
This defines name1 … namen as mutually recursive functions local to expr.
The behavior of other forms of let rec definitions is implementation-dependent. The current implementation also supports a certain class of recursive definitions of non-functional values, as explained in section 12.1.
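For instance, a sketch of two mutually recursive functions local to the body of the let rec (the names even and odd are illustrative):
  let rec even n = if n = 0 then true else odd (n - 1)
  and odd n = if n = 0 then false else even (n - 1) in
  even 10;;   (* evaluates to true *)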
(Introduced in OCaml 4.04)
It is possible to define local exceptions in expressions: let exception constr-decl in expr .
The syntactic scope of the exception constructor is the inner expression, but nothing prevents exception values created with this constructor from escaping this scope. Two executions of the definition above result in two incompatible exception constructors (as for any exception definition). For instance:
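  (* A sketch with illustrative names: each execution of gen creates a fresh,
     incompatible exception constructor E. *)
  let gen () = let exception E in (E, function E -> true | _ -> false);;
  let (e1, is_e1) = gen ();;
  let (e2, _) = gen ();;
  is_e1 e1;;   (* evaluates to true *)
  is_e1 e2;;   (* evaluates to false: e2 was built with a different constructor *)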
(Introduced in OCaml 3.12)
Polymorphic type annotations in let-definitions behave in a way similar to polymorphic methods:
These annotations explicitly require the defined value to be polymorphic, and allow one to use this polymorphism in recursive occurrences (when using let rec). Note however that this is a normal polymorphic type, unifiable with any instance of itself.
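For instance, polymorphic recursion on a non-regular data type requires such an annotation (the type and function names here are illustrative):
  type 'a nested = Leaf of 'a | Node of ('a * 'a) nested
  let rec depth : 'a. 'a nested -> int = function
    | Leaf _ -> 1
    | Node n -> 1 + depth n;;   (* the recursive call is at type ('a * 'a) nested *)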
The expression expr1 ; expr2 evaluates expr1 first, then expr2, and returns the value of expr2.
The expression if expr1 then expr2 else expr3 evaluates to the value of expr2 if expr1 evaluates to the boolean true, and to the value of expr3 if expr1 evaluates to the boolean false.
The else expr3 part can be omitted, in which case it defaults to else ().
The expression
matches the value of expr against the patterns pattern1 to patternn. If the matching against patterni succeeds, the associated expression expri is evaluated, and its value becomes the value of the whole match expression. The evaluation of expri takes place in an environment enriched by the bindings performed during matching. If several patterns match the value of expr, the one that occurs first in the match expression is selected.
If none of the patterns match the value of expr, the exception Match_failure is raised.
The expression expr1 && expr2 evaluates to true if both expr1 and expr2 evaluate to true; otherwise, it evaluates to false. The first component, expr1, is evaluated first. The second component, expr2, is not evaluated if the first component evaluates to false. Hence, the expression expr1 && expr2 behaves exactly as
The expression expr1 || expr2 evaluates to true if one of the expressions expr1 and expr2 evaluates to true; otherwise, it evaluates to false. The first component, expr1, is evaluated first. The second component, expr2, is not evaluated if the first component evaluates to true. Hence, the expression expr1 || expr2 behaves exactly as
The boolean operators & and or are deprecated synonyms for (respectively) && and ||.
The expression while expr1 do expr2 done repeatedly evaluates expr2 while expr1 evaluates to true. The loop condition expr1 is evaluated and tested at the beginning of each iteration. The whole while ⊠done expression evaluates to the unit value ().
The expression for name = expr1 to expr2 do expr3 done first evaluates the expressions expr1 and expr2 (the boundaries) into integer values n and p. Then, the loop body expr3 is repeatedly evaluated in an environment where name is successively bound to the values n, n+1, …, p−1, p. The loop body is never evaluated if n > p.
The expression for name = expr1 downto expr2 do expr3 done evaluates similarly, except that name is successively bound to the values n, n−1, …, p+1, p. The loop body is never evaluated if n < p.
In both cases, the whole for expression evaluates to the unit value ().
The expression
evaluates the expression expr and returns its value if the evaluation of expr does not raise any exception. If the evaluation of expr raises an exception, the exception value is matched against the patterns pattern1 to patternn. If the matching against patterni succeeds, the associated expression expri is evaluated, and its value becomes the value of the whole try expression. The evaluation of expri takes place in an environment enriched by the bindings performed during matching. If several patterns match the value of expr, the one that occurs first in the try expression is selected. If none of the patterns matches the value of expr, the exception value is raised again, thereby transparently "passing through" the try construct.
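For instance, a small sketch (the name safe_div is illustrative):
  let safe_div x y = try Some (x / y) with Division_by_zero -> None;;
  safe_div 7 0;;   (* evaluates to None *)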
The expression expr1 , … , exprn evaluates to the n-tuple of the values of expressions expr1 to exprn. The evaluation order of the subexpressions is not specified.
The expression constr expr evaluates to the unary variant value whose constructor is constr, and whose argument is the value of expr. Similarly, the expression constr ( expr1 , … , exprn ) evaluates to the n-ary variant value whose constructor is constr and whose arguments are the values of expr1, …, exprn.
For lists, some syntactic sugar is provided. The expression expr1 :: expr2 stands for the constructor ( :: ) applied to the arguments ( expr1 , expr2 ), and therefore evaluates to the list whose head is the value of expr1 and whose tail is the value of expr2. The expression [ expr1 ; … ; exprn ] is equivalent to expr1 :: … :: exprn :: [], and therefore evaluates to the list whose elements are the values of expr1 to exprn.
The expression `tag-name expr evaluates to the polymorphic variant value whose tag is tag-name, and whose argument is the value of expr.
The expression { field1 [= expr1] ; … ; fieldn [= exprn ]} evaluates to the record value { field1 = v1; …; fieldn = vn } where vi is the value of expri for i = 1, …, n. A single identifier fieldk stands for fieldk = fieldk, and a qualified identifier module-path . fieldk stands for module-path . fieldk = fieldk. The fields field1 to fieldn must all belong to the same record type; each field of this record type must appear exactly once in the record expression, though they can appear in any order. The order in which expr1 to exprn are evaluated is not specified. Optional type constraints can be added after each field { field1 : typexpr1 = expr1 ; … ; fieldn : typexprn = exprn } to force the type of fieldk to be compatible with typexprk.
The expression { expr with field1 [= expr1] ; … ; fieldn [= exprn] } builds a fresh record with fields field1 … fieldn equal to expr1 … exprn, and all other fields having the same value as in the record expr. In other terms, it returns a shallow copy of the record expr, except for the fields field1 … fieldn, which are initialized to expr1 … exprn. As previously, a single identifier fieldk stands for fieldk = fieldk, a qualified identifier module-path . fieldk stands for module-path . fieldk = fieldk, and it is possible to add an optional type constraint on each field being updated with { expr with field1 : typexpr1 = expr1 ; … ; fieldn : typexprn = exprn }.
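For instance, with an illustrative record type:
  type point = { x : float; y : float; label : string };;
  let p = { x = 1.; y = 2.; label = "p" };;
  let q = { p with y = 0. };;   (* a copy of p whose y field is 0. *)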
The expression expr1 . field evaluates expr1 to a record value, and returns the value associated to field in this record value.
The expression expr1 . field <- expr2 evaluates expr1 to a record value, which is then modified in-place by replacing the value associated to field in this record by the value of expr2. This operation is permitted only if field has been declared mutable in the definition of the record type. The whole expression expr1 . field <- expr2 evaluates to the unit value ().
The expression [| expr1 ; … ; exprn |] evaluates to an n-element array, whose elements are initialized with the values of expr1 to exprn respectively. The order in which these expressions are evaluated is unspecified.
The expression expr1 .( expr2 ) returns the value of element number expr2 in the array denoted by expr1. The first element has number 0; the last element has number n−1, where n is the size of the array. The exception Invalid_argument is raised if the access is out of bounds.
The expression expr1 .( expr2 ) <- expr3 modifies in-place the array denoted by expr1, replacing element number expr2 by the value of expr3. The exception Invalid_argument is raised if the access is out of bounds. The value of the whole expression is ().
The expression expr1 .[ expr2 ] returns the value of character number expr2 in the string denoted by expr1. The first character has number 0; the last character has number n−1, where n is the length of the string. The exception Invalid_argument is raised if the access is out of bounds.
The expression expr1 .[ expr2 ] <- expr3 modifies in-place the string denoted by expr1, replacing character number expr2 by the value of expr3. The exception Invalid_argument is raised if the access is out of bounds. The value of the whole expression is (). Note: this possibility is offered only for backward compatibility with older versions of OCaml and will be removed in a future version. New code should use byte sequences and the Bytes.set function.
Symbols from the class infix-symbol, as well as the keywords *, +, -, -., =, !=, <, >, or, ||, &, &&, :=, mod, land, lor, lxor, lsl, lsr, and asr can appear in infix position (between two expressions). Symbols from the class prefix-symbol, as well as the keywords - and -. can appear in prefix position (in front of an expression).
Infix and prefix symbols do not have a fixed meaning: they are simply interpreted as applications of functions bound to the names corresponding to the symbols. The expression prefix-symbol expr is interpreted as the application ( prefix-symbol ) expr. Similarly, the expression expr1 infix-symbol expr2 is interpreted as the application ( infix-symbol ) expr1 expr2.
The table below lists the symbols defined in the initial environment and their initial meaning. (See the description of the core library module Stdlib in chapter 27 for more details.) Their meaning may be changed at any time using let ( infix-op ) name1 name2 = …
Note: the operators &&, ||, and ~- are handled specially and it is not advisable to change their meaning.
The keywords - and -. can appear both as infix and prefix operators. When they appear as prefix operators, they are interpreted respectively as the functions (~-) and (~-.).
Operator | Initial meaning |
+ | Integer addition. |
- (infix) | Integer subtraction. |
~- - (prefix) | Integer negation. |
* | Integer multiplication. |
/ | Integer division. Raise Division_by_zero if second argument is zero. |
mod | Integer modulus. Raise Division_by_zero if second argument is zero. |
land | Bitwise logical "and" on integers. |
lor | Bitwise logical "or" on integers. |
lxor | Bitwise logical "exclusive or" on integers. |
lsl | Bitwise logical shift left on integers. |
lsr | Bitwise logical shift right on integers. |
asr | Bitwise arithmetic shift right on integers. |
+. | Floating-point addition. |
-. (infix) | Floating-point subtraction. |
~-. -. (prefix) | Floating-point negation. |
*. | Floating-point multiplication. |
/. | Floating-point division. |
** | Floating-point exponentiation. |
@ | List concatenation. |
^ | String concatenation. |
! | Dereferencing (return the current contents of a reference). |
:= | Reference assignment (update the reference given as first argument with the value of the second argument). |
= | Structural equality test. |
<> | Structural inequality test. |
== | Physical equality test. |
!= | Physical inequality test. |
< | Test "less than". |
<= | Test "less than or equal". |
> | Test "greater than". |
>= | Test "greater than or equal". |
&& & | Boolean conjunction. |
|| or | Boolean disjunction. |
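For instance, operators can be used as ordinary functions, and new infix symbols can be bound (the symbol +! below is illustrative):
  ( + ) 1 2;;                                   (* the predefined + applied as a function *)
  let ( +! ) (a, b) (c, d) = (a + c, b + d);;   (* a user-defined infix operator *)
  (1, 2) +! (3, 4);;                            (* evaluates to (4, 6) *)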
When class-path evaluates to a class body, new class-path evaluates to a new object containing the instance variables and methods of this class.
When class-path evaluates to a class function, new class-path evaluates to a function expecting the same number of arguments and returning a new object of this class.
Creating directly an object through the object class-body end construct is operationally equivalent to defining locally a class class-name = object class-body end (see sections 11.9.2 and following for the syntax of class-body) and immediately creating a single object from it by new class-name.
The typing of immediate objects is slightly different from explicitly defining a class in two respects. First, the inferred object type may contain free type variables. Second, since the class body of an immediate object will never be extended, its self type can be unified with a closed object type.
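For instance, an immediate object (the name counter is illustrative):
  let counter = object
    val mutable n = 0
    method incr = n <- n + 1
    method value = n
  end;;
  counter#incr; counter#value;;   (* evaluates to 1 *)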
The expression expr # method-name invokes the method method-name of the object denoted by expr.
If method-name is a polymorphic method, its type should be known at the invocation site. This is true for instance if expr is the name of a fresh object (let ident = new class-path … ) or if there is a type constraint. Principality of the derivation can be checked in the -principal mode.
The instance variables of a class are visible only in the body of the methods defined in the same class or a class that inherits from the class defining the instance variables. The expression inst-var-name evaluates to the value of the given instance variable. The expression inst-var-name <- expr assigns the value of expr to the instance variable inst-var-name, which must be mutable. The whole expression inst-var-name <- expr evaluates to ().
An object can be duplicated using the library function Oo.copy (see module Oo). Inside a method, the expression {< [inst-var-name [= expr] { ; inst-var-name [= expr] }] >} returns a copy of self with the given instance variables replaced by the values of the associated expressions. A single instance variable name id stands for id = id. Other instance variables have the same value in the returned object as in self.
Expressions whose type contains object or polymorphic variant types can be explicitly coerced (weakened) to a supertype. The expression (expr :> typexpr) coerces the expression expr to type typexpr. The expression (expr : typexpr1 :> typexpr2) coerces the expression expr from type typexpr1 to type typexpr2.
The former operator will sometimes fail to coerce an expression expr from a type typ1 to a type typ2 even if type typ1 is a subtype of type typ2: in the current implementation it only expands two levels of type abbreviations containing objects and/or polymorphic variants, keeping only recursion when it is explicit in the class type (for objects). As an exception to the above algorithm, if both the inferred type of expr and typ are ground (i.e. do not contain type variables), the former operator behaves as the latter one, taking the inferred type of expr as typ1. In case of failure with the former operator, the latter one should be used.
It is only possible to coerce an expression expr from type typ1 to type typ2, if the type of expr is an instance of typ1 (like for a type annotation), and typ1 is a subtype of typ2. The type of the coerced expression is an instance of typ2. If the types contain variables, they may be instantiated by the subtyping algorithm, but this is only done after determining whether typ1 is a potential subtype of typ2. This means that typing may fail during this latter unification step, even if some instance of typ1 is a subtype of some instance of typ2. In the following paragraphs we describe the subtyping relation used.
A fixed object type admits as subtype any object type that includes all its methods. The types of the methods shall be subtypes of those in the supertype. Namely,
is a supertype of
which may contain an ellipsis .. if every typi is a supertype of the corresponding typ′i.
A monomorphic method type can be a supertype of a polymorphic method type. Namely, if typ is an instance of typ′, then 'a1 … 'an . typ′ is a subtype of typ.
Inside a class definition, newly defined types are not available for subtyping, as the type abbreviations are not yet completely defined. There is an exception for coercing self to the (exact) type of its class: this is allowed if the type of self does not appear in a contravariant position in the class type, i.e. if there are no binary methods.
A polymorphic variant type typ is a subtype of another polymorphic variant type typ′ if the upper bound of typ (i.e. the maximum set of constructors that may appear in an instance of typ) is included in the lower bound of typ′, and the types of arguments for the constructors of typ are subtypes of those in typ′. Namely,
which may be a shrinkable type, is a subtype of
which may be an extensible type, if every typi is a subtype of typ′i.
Other types do not introduce new subtyping, but they may propagate the subtyping of their arguments. For instance, typ1 * typ2 is a subtype of typ′1 * typ′2 when typ1 and typ2 are respectively subtypes of typ′1 and typ′2. For function types, the relation is more subtle: typ1 -> typ2 is a subtype of typ′1 -> typ′2 if typ1 is a supertype of typ′1 and typ2 is a subtype of typ′2. For this reason, function types are covariant in their second argument (like tuples), but contravariant in their first argument. Mutable types, like array or ref, are neither covariant nor contravariant; they are nonvariant, that is, they do not propagate subtyping.
For user-defined types, the variance is automatically inferred: a parameter is covariant if it has only covariant occurrences, contravariant if it has only contravariant occurrences, variance-free if it has no occurrences, and nonvariant otherwise. A variance-free parameter may change freely through subtyping, it does not have to be a subtype or a supertype. For abstract and private types, the variance must be given explicitly (see section 11.8.1), otherwise the default is nonvariant. This is also the case for constrained arguments in type definitions.
OCaml supports the assert construct to check debugging assertions. The expression assert expr evaluates the expression expr and returns () if expr evaluates to true. If it evaluates to false the exception Assert_failure is raised with the source file name and the location of expr as arguments. Assertion checking can be turned off with the -noassert compiler option. In this case, expr is not evaluated at all.
As a special case, assert false is reduced to raise (Assert_failure ...), which gives it a polymorphic type. This means that it can be used in place of any expression (for example as a branch of any pattern-matching). It also means that the assert false "assertions" cannot be turned off by the -noassert option.
The expression lazy expr returns a value v of type Lazy.t that encapsulates the computation of expr. The argument expr is not evaluated at this point in the program. Instead, its evaluation will be performed the first time the function Lazy.force is applied to the value v, returning the actual value of expr. Subsequent applications of Lazy.force to v do not evaluate expr again. Applications of Lazy.force may be implicit through pattern matching (see 11.6).
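For instance (the suspension below is illustrative):
  let l = lazy (print_endline "computing"; 40 + 2);;
  Lazy.force l;;   (* prints "computing" and evaluates to 42 *)
  Lazy.force l;;   (* evaluates to 42 without printing again *)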
The expression let module module-name = module-expr in expr locally binds the module expression module-expr to the identifier module-name during the evaluation of the expression expr. It then returns the value of expr. For example:
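  (* A sketch with an illustrative local module: *)
  let module M = struct let twice x = 2 * x end in M.twice 21;;   (* evaluates to 42 *)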
The expressions let open module-path in expr and module-path.(expr) are strictly equivalent. These constructions locally open the module referred to by the module path module-path in the respective scope of the expression expr.
When the body of a local open expression is delimited by [ ], [| |], or { }, the parentheses can be omitted. For expressions, parentheses can also be omitted for {< >}. For example, module-path.[expr] is equivalent to module-path.([expr]), and module-path.[| expr |] is equivalent to module-path.([| expr |]).
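For instance (both lines below are illustrative):
  String.(length (trim "  hello  "));;   (* same as String.length (String.trim "  hello  ") *)
  Int32.[ of_int 1; of_int 2 ];;         (* same as Int32.([ of_int 1; of_int 2 ]) *)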
Type definitions bind type constructors to data types: either variant types, record types, type abbreviations, or abstract data types. They also bind the value constructors and record fields associated with the definition.
See also the following language extensions: private types, generalized algebraic datatypes, attributes, extension nodes, extensible variant types and inline records.
Type definitions are introduced by the type keyword, and consist in one or several simple definitions, possibly mutually recursive, separated by the and keyword. Each simple definition defines one type constructor.
A simple definition consists in a lowercase identifier, possibly preceded by one or several type parameters, and followed by an optional type equation, then an optional type representation, and then a constraint clause. The identifier is the name of the type constructor being defined.
type colour = | Red | Green | Blue | Yellow | Black | White | RGB of {r : int; g : int; b : int}
type 'a tree = Lf | Br of 'a * 'a tree * 'a;;
type t = {decoration : string; substance : t'}
and t' = Int of int | List of t list
In the right-hand side of type definitions, references to one of the type constructor name being defined are considered as recursive, unless type is followed by nonrec. The nonrec keyword was introduced in OCaml 4.02.2.
The optional type parameters are either one type variable ' ident, for type constructors with one parameter, or a list of type variables ('ident1, …, 'identn), for type constructors with several parameters. Each type parameter may be prefixed by a variance constraint + (resp. -) indicating that the parameter is covariant (resp. contravariant), and an injectivity annotation ! indicating that the parameter can be deduced from the whole type. These type parameters can appear in the type expressions of the right-hand side of the definition, optionally restricted by a variance constraint; i.e. a covariant parameter may only appear on the right side of a functional arrow (more precisely, follow the left branch of an even number of arrows), and a contravariant parameter only the left side (left branch of an odd number of arrows). If the type has a representation or an equation, and the parameter is free (i.e. not bound via a type constraint to a constructed type), its variance constraint is checked but subtyping etc. will use the inferred variance of the parameter, which may be less restrictive; otherwise (i.e. for abstract types or non-free parameters), the variance must be given explicitly, and the parameter is invariant if no variance is given.
The optional type equation = typexpr makes the defined type equivalent to the type expression typexpr: one can be substituted for the other during typing. If no type equation is given, a new type is generated: the defined type is incompatible with any other type.
The optional type representation describes the data structure representing the defined type, by giving the list of associated constructors (if it is a variant type) or associated fields (if it is a record type). If no type representation is given, nothing is assumed on the structure of the type besides what is stated in the optional type equation.
The type representation = [|] constr-decl { | constr-decl } describes a variant type. The constructor declarations constr-decl1, …, constr-decln describe the constructors associated to this variant type. The constructor declaration constr-name of typexpr1 * … * typexprn declares the name constr-name as a non-constant constructor, whose arguments have types typexpr1 … typexprn. The constructor declaration constr-name declares the name constr-name as a constant constructor. Constructor names must be capitalized.
The type representation = { field-decl { ; field-decl } [;] } describes a record type. The field declarations field-decl1, …, field-decln describe the fields associated to this record type. The field declaration field-name : poly-typexpr declares field-name as a field whose argument has type poly-typexpr. The field declaration mutable field-name : poly-typexpr behaves similarly; in addition, it allows physical modification of this field. Immutable fields are covariant, mutable fields are non-variant. Both mutable and immutable fields may have explicitly polymorphic types. The polymorphism of the contents is statically checked whenever a record value is created or modified. Extracted values may have their types instantiated.
The two components of a type definition, the optional equation and the optional representation, can be combined independently, giving rise to four typical situations:
The type variables appearing as type parameters can optionally be prefixed by + or - to indicate that the type constructor is covariant or contravariant with respect to this parameter. This variance information is used to decide subtyping relations when checking the validity of :> coercions (see section 11.7.7).
For instance, type +'a t declares t as an abstract type that is covariant in its parameter; this means that if the type σ is a subtype of the type τ, then σ t is a subtype of τ t. Similarly, type -'a t declares that the abstract type t is contravariant in its parameter: if σ is a subtype of τ, then τ t is a subtype of σ t. If no + or - variance annotation is given, the type constructor is assumed non-variant in the corresponding parameter. For instance, the abstract type declaration type 'a t means that σ t is neither a subtype nor a supertype of τ t if σ is a subtype of τ.
The variance indicated by the + and - annotations on parameters is enforced only for abstract and private types, or when there are type constraints. Otherwise, for abbreviations, variant and record types without type constraints, the variance properties of the type constructor are inferred from its definition, and the variance annotations are only checked for conformance with the definition.
Injectivity annotations are only necessary for abstract types and private row types, since they can otherwise be deduced from the type declaration: all parameters are injective for record and variant type declarations (including extensible types); for type abbreviations a parameter is injective if it has an injective occurrence in its defining equation (be it private or not). For constrained type parameters in type abbreviations, they are injective if either they appear at an injective position in the body, or if all their type variables are injective; in particular, if a constrained type parameter contains a variable that doesn't appear in the body, it cannot be injective.
The construct constraint ' ident = typexpr allows the specification of type parameters. Any actual type argument corresponding to the type parameter ident has to be an instance of typexpr (more precisely, ident and typexpr are unified). Type variables of typexpr can appear in the type equation and the type declaration.
Exception definitions add new constructors to the built-in variant type exn of exception values. The constructors are declared as for a definition of a variant type.
The form exception constr-decl generates a new exception, distinct from all other exceptions in the system. The form exception constr-name = constr gives an alternate name to an existing exception.
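For instance (constructor names chosen for illustration):
  exception Parse_error of string;;     (* a new exception *)
  exception Bad_input = Parse_error;;   (* an alternate name for the same exception *)
  (try raise (Parse_error "oops") with Bad_input msg -> msg);;   (* evaluates to "oops" *)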
Classes are defined using a small language, similar to the module language.
Class types are the class-level equivalent of type expressions: they specify the general shape and type properties of classes.
See also the following language extensions: attributes and extension nodes.
The expression classtype-path is equivalent to the class type bound to the name classtype-path. Similarly, the expression [ typexpr1 , … , typexprn ] classtype-path is equivalent to the parametric class type bound to the name classtype-path, in which type parameters have been instantiated to respectively typexpr1, … typexprn.
The class type expression typexpr -> class-type is the type of class functions (functions from values to classes) that take as argument a value of type typexpr and return as result a class of type class-type.
The class type expression object [( typexpr )] { class-field-spec } end is the type of a class body. It specifies its instance variables and methods. In this type, typexpr is matched against the self type, therefore providing a name for the self type.
A class body will match a class body type if it provides definitions for all the components specified in the class body type, and these definitions meet the type requirements given in the class body type. Furthermore, all methods either virtual or public present in the class body must also be present in the class body type (on the other hand, some instance variables and concrete private methods may be omitted). A virtual method will match a concrete method, which makes it possible to forget its implementation. An immutable instance variable will match a mutable instance variable.
Local opens are supported in class types since OCaml 4.06.
The inheritance construct inherit class-body-type provides for inclusion of methods and instance variables from other class types. The instance variable and method types from class-body-type are added into the current class type.
A specification of an instance variable is written val [mutable] [virtual] inst-var-name : typexpr, where inst-var-name is the name of the instance variable and typexpr its expected type. The flag mutable indicates whether this instance variable can be physically modified. The flag virtual indicates that this instance variable is not initialized. It can be initialized later through inheritance.
An instance variable specification will hide any previous specification of an instance variable of the same name.
The specification of a method is written method [private] method-name : poly-typexpr, where method-name is the name of the method and poly-typexpr its expected type, possibly polymorphic. The flag private indicates that the method cannot be accessed from outside the object.
The polymorphism may be left implicit in public method specifications: any type variable which is not bound to a class parameter and does not appear elsewhere inside the class specification will be assumed to be universal, and made polymorphic in the resulting method type. Writing an explicit polymorphic type will disable this behaviour.
If several specifications are present for the same method, they must have compatible types. Any non-private specification of a method forces it to be public.
A virtual method specification is written method [private] virtual method-name : poly-typexpr, where method-name is the name of the method and poly-typexpr its expected type.
The construct constraint typexpr1 = typexpr2 forces the two type expressions to be equal. This is typically used to specify type parameters: in this way, they can be bound to specific type expressions.
Class expressions are the class-level equivalent of value expressions: they evaluate to classes, thus providing implementations for the specifications expressed in class types.
See also the following language extensions: locally abstract types, attributes and extension nodes.
The expression class-path evaluates to the class bound to the name class-path. Similarly, the expression [ typexpr1 , … , typexprn ] class-path evaluates to the parametric class bound to the name class-path, in which type parameters have been instantiated respectively to typexpr1, … typexprn.
The expression ( class-expr ) evaluates to the same class as class-expr.
The expression ( class-expr : class-type ) checks that class-type matches the type of class-expr (that is, that the implementation class-expr meets the type specification class-type). The whole expression evaluates to the same class as class-expr, except that all components not specified in class-type are hidden and can no longer be accessed.
Class application is denoted by juxtaposition of (possibly labeled) expressions. It denotes the class whose constructor is the first expression applied to the given arguments. The arguments are evaluated as for expression application, but the constructor itself will only be evaluated when objects are created. In particular, side-effects caused by the application of the constructor will only occur at object creation time.
The expression fun [[?]label-name:]pattern -> class-expr evaluates to a function from values to classes. When this function is applied to a value v, this value is matched against the pattern pattern and the result is the result of the evaluation of class-expr in the extended environment.
Conversion from functions with default values to functions with patterns only works identically for class functions as for normal functions.
The expression
is a short form for
The let and let rec constructs bind value names locally, as for the core language expressions.
If a local definition occurs at the very beginning of a class definition, it will be evaluated when the class is created (just as if the definition was outside of the class). Otherwise, it will be evaluated when the object constructor is called.
Local opens are supported in class expressions since OCaml 4.06.
The expression object class-body end denotes a class body. This is the prototype for an object : it lists the instance variables and methods of an object of this class.
A class body is a class value: it is not evaluated at once. Rather, its components are evaluated each time an object is created.
In a class body, the pattern ( pattern [: typexpr] ) is matched against self, therefore providing a binding for self and the self type. Self can only be used in methods and initializers.
Self type cannot be a closed object type, so that the class remains extensible.
Since OCaml 4.01, it is an error if the same method or instance variable name is defined several times in the same class body.
The inheritance construct inherit class-expr allows reusing methods and instance variables from other classes. The class expression class-expr must evaluate to a class body. The instance variables, methods and initializers from this class body are added into the current class. The addition of a method will override any previously defined method of the same name.
An ancestor can be bound by appending as lowercase-ident to the inheritance construct. lowercase-ident is not a true variable and can only be used to select a method, i.e. in an expression lowercase-ident # method-name. This gives access to the method method-name as it was defined in the parent class even if it is redefined in the current class. The scope of this ancestor binding is limited to the current class. The ancestor method may be called from a subclass but only indirectly.
The definition val [mutable] inst-var-name = expr adds an instance variable inst-var-name whose initial value is the value of expression expr. The flag mutable allows physical modification of this variable by methods.
An instance variable can only be used in the methods and initializers that follow its definition.
Since version 3.10, redefinitions of a visible instance variable with the same name do not create a new variable, but are merged, using the last value for initialization. They must have identical types and mutability. However, if an instance variable is hidden by omitting it from an interface, it will be kept distinct from other instance variables with the same name.
A variable specification is written val [mutable] virtual inst-var-name : typexpr. It specifies whether the variable is modifiable, and gives its type.
Virtual instance variables were added in version 3.10.
A method definition is written method method-name = expr. The definition of a method overrides any previous definition of this method. The method will be public (that is, not private) if any of the definition states so.
A private method, method private method-name = expr, is a method that can only be invoked on self (from other methods of the same object, defined in this class or one of its subclasses). This invocation is performed using the expression value-name # method-name, where value-name is directly bound to self at the beginning of the class definition. Private methods do not appear in object types. A method may have both public and private definitions, but as soon as there is a public one, all subsequent definitions will be made public.
Methods may have an explicitly polymorphic type, allowing them to be used polymorphically in programs (even for the same object). The explicit declaration may be done in one of three ways: (1) by giving an explicit polymorphic type in the method definition, immediately after the method name, i.e. method [private] method-name : { ' ident }+ . typexpr = expr; (2) by a forward declaration of the explicit polymorphic type through a virtual method definition; (3) by importing such a declaration through inheritance and/or constraining the type of self.
Some special expressions are available in method bodies for manipulating instance variables and duplicating self:
The expression inst-var-name <- expr modifies in-place the current object by replacing the value associated to inst-var-name by the value of expr. Of course, this instance variable must have been declared mutable.
The expression {< inst-var-name1 = expr1 ; … ; inst-var-namen = exprn >} evaluates to a copy of the current object in which the values of instance variables inst-var-name1, …, inst-var-namen have been replaced by the values of the corresponding expressions expr1, …, exprn.
A method specification is written method [private] virtual method-name : poly-typexpr. It specifies whether the method is public or private, and gives its type. If the method is intended to be polymorphic, the type must be explicitly polymorphic.
Since OCaml 3.12, the keywords inherit!, val! and method! have the same semantics as inherit, val and method, but they additionally require the definition they introduce to be overriding. Namely, method! requires method-name to be already defined in this class, val! requires inst-var-name to be already defined in this class, and inherit! requires class-expr to override some definitions. If no such overriding occurs, an error is signaled.
As a side-effect, these 3 keywords avoid the warnings 7 (method override) and 13 (instance variable override). Note that warning 7 is disabled by default.
The construct constraint typexpr1 = typexpr2 forces the two type expressions to be equal. This is typically used to specify type parameters: in that way they can be bound to specific type expressions.
A class initializer initializer expr specifies an expression that will be evaluated whenever an object is created from the class, once all its instance variables have been initialized.
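For instance, a small class sketch combining an instance variable, methods and an initializer (all names are illustrative):
  class counter (start : int) = object
    val mutable n = start
    method incr = n <- n + 1
    method value = n
    initializer if start < 0 then invalid_arg "counter"
  end;;
  let c = new counter 0 in c#incr; c#value;;   (* evaluates to 1 *)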
A class definition class class-binding { and class-binding } is recursive. Each class-binding defines a class-name that can be used in the whole expression except for inheritance. It can also be used for inheritance, but only in the definitions that follow its own.
A class binding binds the class name class-name to the value of expression class-expr. It also binds the class type class-name to the type of the class, and defines two type abbreviations: class-name and # class-name. The first one is the type of objects of this class, while the second is more general as it unifies with the type of any object belonging to a subclass (see section 11.4).
A class must be flagged virtual if one of its methods is virtual (that is, appears in the class type, but is not actually defined). Objects cannot be created from a virtual class.
The class type parameters correspond to the ones of the class type and of the two type abbreviations defined by the class binding. They must be bound to actual types in the class definition using type constraints. So that the abbreviations are well-formed, type variables of the inferred type of the class must either be type parameters or be bound in the constraint clause.
This is the counterpart in signatures of class definitions. A class specification matches a class definition if they have the same type parameters and their types match.
A class type definition class class-name = class-body-type defines an abbreviation class-name for the class body type class-body-type. As for class definitions, two type abbreviations class-name and # class-name are also defined. The definition can be parameterized by some type parameters. If any method in the class type body is virtual, the definition must be flagged virtual.
Two class type definitions match if they have the same type parameters and they expand to matching types.
Module types are the module-level equivalent of type expressions: they specify the general shape and type properties of modules.
See also the following language extensions: recovering the type of a module, substitution inside a signature, type-level module aliases, attributes, extension nodes, generative functors, and module type substitutions.
The expression modtype-path is equivalent to the module type bound to the name modtype-path. The expression ( module-type ) denotes the same type as module-type.
Signatures are type specifications for structures. Signatures sig … end are collections of type specifications for value names, type names, exceptions, module names and module type names. A structure will match a signature if the structure provides definitions (implementations) for all the names specified in the signature (and possibly more), and these definitions meet the type requirements given in the signature.
An optional ;; is allowed after each specification in a signature. It serves as a syntactic separator with no semantic meaning.
A specification of a value component in a signature is written val value-name : typexpr, where value-name is the name of the value and typexpr its expected type.
The form external value-name : typexpr = external-declaration is similar, except that it requires in addition the name to be implemented as the external function specified in external-declaration (see chapter 22).
A specification of one or several type components in a signature is written type typedef { and typedef } and consists of a sequence of mutually recursive definitions of type names.
Each type definition in the signature specifies an optional type equation = typexpr and an optional type representation = constr-decl … or = { field-decl … }. The implementation of the type name in a matching structure must be compatible with the type expression specified in the equation (if given), and have the specified representation (if given). Conversely, users of that signature will be able to rely on the type equation or type representation, if given. More precisely, we have the following four situations:
The specification exception constr-decl in a signature requires the matching structure to provide an exception with the name and arguments specified in the definition, and makes the exception available to all users of the structure.
A specification of one or several classes in a signature is written class class-spec { and class-spec } and consists of a sequence of mutually recursive definitions of class names.
Class specifications are described more precisely in section 11.9.4.
A specification of one or several class types in a signature is written class type classtype-def { and classtype-def } and consists of a sequence of mutually recursive definitions of class type names. Class type specifications are described more precisely in section 11.9.5.
A specification of a module component in a signature is written module module-name : module-type, where module-name is the name of the module component and module-type its expected type. Modules can be nested arbitrarily; in particular, functors can appear as components of structures and functor types as components of signatures.
For specifying a module component that is a functor, one may write
instead of
A module type component of a signature can be specified either as a manifest module type or as an abstract module type.
An abstract module type specification module type modtype-name allows the name modtype-name to be implemented by any module type in a matching signature, but hides the implementation of the module type to all users of the signature.
A manifest module type specification module type modtype-name = module-type requires the name modtype-name to be implemented by the module type module-type in a matching signature, but makes the equality between modtype-name and module-type apparent to all users of the signature.
The expression open module-path in a signature does not specify any components. It simply affects the parsing of the following items of the signature, allowing components of the module denoted by module-path to be referred to by their simple names name instead of path accesses module-path . name. The scope of the open stops at the end of the signature expression.
The expression include module-type in a signature performs textual inclusion of the components of the signature denoted by module-type. It behaves as if the components of the included signature were copied at the location of the include. The module-type argument must refer to a module type that is a signature, not a functor type.
The module type expression functor ( module-name : module-type1 ) -> module-type2 is the type of functors (functions from modules to modules) that take as argument a module of type module-type1 and return as result a module of type module-type2. The module type module-type2 can use the name module-name to refer to type components of the actual argument of the functor. If the type module-type2 does not depend on type components of module-name, the module type expression can be simplified with the alternative short syntax module-type1 -> module-type2. No restrictions are placed on the type of the functor argument; in particular, a functor may take another functor as argument ("higher-order" functor).
When the result module type is itself a functor,
one may use the abbreviated form
Assuming module-type denotes a signature, the expression module-type with mod-constraint { and mod-constraint } denotes the same signature where type equations have been added to some of the type specifications, as described by the constraints following the with keyword. The constraint type [type-parameters] typeconstr = typexpr adds the type equation = typexpr to the specification of the type component named typeconstr of the constrained signature. The constraint module module-path = extended-module-path adds type equations to all type components of the sub-structure denoted by module-path, making them equivalent to the corresponding type components of the structure denoted by extended-module-path.
For instance, if the module type name S is bound to the signature
sig type t module M: (sig type u end) end
then S with type t=int denotes the signature
sig type t=int module M: (sig type u end) end
and S with module M = N denotes the signature
sig type t module M: (sig type u=N.u end) end
A functor taking two arguments of type S that share their t component is written
functor (A: S) (B: S with type t = A.t) ...
Constraints are added left to right. After each constraint has been applied, the resulting signature must be a subtype of the signature before the constraint was applied. Thus, the with operator can only add information on the type components of a signature, but never remove information.
Module expressions are the module-level equivalent of value expressions: they evaluate to modules, thus providing implementations for the specifications expressed in module types.
See also the following language extensions: recursive modules, first-class modules, overriding in open statements, attributes, extension nodes and generative functors.
The expression module-path evaluates to the module bound to the name module-path.
The expression ( module-expr ) evaluates to the same module as module-expr.
The expression ( module-expr : module-type ) checks that the type of module-expr is a subtype of module-type, that is, that all components specified in module-type are implemented in module-expr, and their implementation meets the requirements given in module-type. In other terms, it checks that the implementation module-expr meets the type specification module-type. The whole expression evaluates to the same module as module-expr, except that all components not specified in module-type are hidden and can no longer be accessed.
Structures struct … end are collections of definitions for value names, type names, exceptions, module names and module type names. The definitions are evaluated in the order in which they appear in the structure. The scopes of the bindings performed by the definitions extend to the end of the structure. As a consequence, a definition may refer to names bound by earlier definitions in the same structure.
For compatibility with toplevel phrases (chapter 14), optional ;; are allowed after and before each definition in a structure. These ;; have no semantic meaning. Similarly, an expr preceded by ;; is allowed as a component of a structure. It is equivalent to let _ = expr, i.e. expr is evaluated for its side-effects but is not bound to any identifier. If expr is the first component of a structure, the preceding ;; can be omitted.
A value definition let [rec] let-binding { and let-binding } binds value names in the same way as a let … in … expression (see section 11.7.2). The value names appearing in the left-hand sides of the bindings are bound to the corresponding values in the right-hand sides.
A value definition external value-name : typexpr = external-declaration implements value-name as the external function specified in external-declaration (see chapter 22).
A definition of one or several type components is written type typedef { and typedef } and consists of a sequence of mutually recursive definitions of type names.
Exceptions are defined with the syntax exception constr-decl or exception constr-name = constr.
A definition of one or several classes is written class class-binding { and class-binding } and consists of a sequence of mutually recursive definitions of class names. Class definitions are described more precisely in section 11.9.3.
A definition of one or several class types is written class type classtype-def { and classtype-def } and consists of a sequence of mutually recursive definitions of class type names. Class type definitions are described more precisely in section 11.9.5.
The basic form for defining a module component is module module-name = module-expr, which evaluates module-expr and binds the result to the name module-name.
One can write
instead of
Another derived form is
which is equivalent to
A definition for a module type is written module type modtype-name = module-type. It binds the name modtype-name to the module type denoted by the expression module-type.
The expression open module-path in a structure does not define any components nor perform any bindings. It simply affects the parsing of the following items of the structure, allowing components of the module denoted by module-path to be referred to by their simple names name instead of path accesses module-path . name. The scope of the open stops at the end of the structure expression.
The expression include module-expr in a structure re-exports in the current structure all definitions of the structure denoted by module-expr. For instance, if you define a module S as below
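  module S = struct type t = int let x = 1 end   (* contents chosen for illustration *)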
defining the module B as
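  module B = struct include S let y = x + 1 end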
is equivalent to defining it as
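  module B = struct type t = int let x = 1 let y = x + 1 end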
The difference between open and include is that open simply provides short names for the components of the opened structure, without defining any components of the current structure, while include also adds definitions for the components of the included structure.
The expression functor ( module-name : module-type ) -> module-expr evaluates to a functor that takes as argument modules of the type module-type, binds module-name to these modules, evaluates module-expr in the extended environment, and returns the resulting modules as results. No restrictions are placed on the type of the functor argument; in particular, a functor may take another functor as argument ("higher-order" functor).
When the result module expression is itself a functor,
one may use the abbreviated form
The expression module-expr1 ( module-expr2 ) evaluates module-expr1 to a functor and module-expr2 to a module, and applies the former to the latter. The type of module-expr2 must match the type expected for the arguments of the functor module-expr1.
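For instance, a sketch of a functor definition and its application (all names are illustrative):
  module type SHOW = sig type t val to_string : t -> string end
  module Print (X : SHOW) = struct
    let print x = print_endline (X.to_string x)
  end
  module IntShow = struct type t = int let to_string = string_of_int end
  module PrintInt = Print (IntShow);;
  PrintInt.print 42;;   (* prints 42 *)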
|
Compilation units bridge the module system and the separate compilation system. A compilation unit is composed of two parts: an interface and an implementation. The interface contains a sequence of specifications, just as the inside of a sig … end signature expression. The implementation contains a sequence of definitions and expressions, just as the inside of a struct … end module expression. A compilation unit also has a name unit-name, derived from the names of the files containing the interface and the implementation (see chapter 13 for more details). A compilation unit behaves roughly as the module definition module unit-name : sig unit-interface end = struct unit-implementation end.
A compilation unit can refer to other compilation units by their names, as if they were regular modules. For instance, if U is a compilation unit that defines a type t, other compilation units can refer to that type under the name U.t; they can also refer to U as a whole structure. Except for names of other compilation units, a unit interface or unit implementation must not have any other free variables. In other terms, the type-checking and compilation of an interface or implementation proceeds in an initial environment containing module name1 : sig specification1 end … module namen : sig specificationn end, where name1 … namen are the names of the other compilation units available in the search path (see chapter 13 for more details) and specification1 … specificationn are their respective interfaces.
This chapter describes language extensions and convenience features that are implemented in OCaml, but not described in chapter 11.
(Introduced in Objective Caml 1.00)
As mentioned in section 11.7.2, the let rec binding construct, in addition to the definition of recursive functions, also supports a certain class of recursive definitions of non-functional values, such as let rec name1 = 1 :: name2 and name2 = 2 :: name1 in expr, which binds name1 to the cyclic list 1::2::1::2::… and name2 to the cyclic list 2::1::2::1::…. Informally, the class of accepted definitions consists of those definitions where the defined names occur only inside function bodies or as argument to a data constructor.
More precisely, consider the expression let rec name1 = expr1 and … and namen = exprn in expr. It will be accepted if each one of expr1 … exprn is statically constructive with respect to name1 … namen, is not immediately linked to any of name1 … namen, and is not an array constructor whose arguments have abstract type.
An expression e is said to be statically constructive with respect to the variables name1 ⊠namen if at least one of the following conditions is true:
An expression e is said to be immediately linked to the variable name in the following cases:
(Introduced in Objective Caml 3.07)
|
Recursive module definitions, introduced by the module rec … and … construction, generalize regular module definitions module module-name = module-expr and module specifications module module-name : module-type by allowing the defining module-expr and the module-type to refer recursively to the module identifiers being defined. A typical example of a recursive module definition is:
It can be given the following specification:
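A sketch of the classic example, a type of trees whose nodes carry sets of trees (the names A and ASet are illustrative):

module rec A : sig
  type t = Leaf of string | Node of ASet.t
  val compare : t -> t -> int
end = struct
  type t = Leaf of string | Node of ASet.t
  let compare t1 t2 =
    match t1, t2 with
    | Leaf s1, Leaf s2 -> Stdlib.compare s1 s2
    | Leaf _, Node _ -> 1
    | Node _, Leaf _ -> -1
    | Node n1, Node n2 -> ASet.compare n1 n2
end
and ASet : Set.S with type elt = A.t = Set.Make (A)

(* and the corresponding specification: *)
module rec A : sig
  type t = Leaf of string | Node of ASet.t
  val compare : t -> t -> int
end
and ASet : Set.S with type elt = A.t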
This is an experimental extension of OCaml: the class of accepted recursive definitions, as well as their dynamic semantics, is not final and may change in future releases.
Currently, the compiler requires that all dependency cycles between the recursively-defined module identifiers go through at least one âsafeâ module. A module is âsafeâ if all value definitions that it contains have function types typexpr1 -> typexpr2. Evaluation of a recursive module definition proceeds by building initial values for the safe modules involved, binding all (functional) values to fun _ -> raise Undefined_recursive_module. The defining module expressions are then evaluated, and the initial values for the safe modules are replaced by the values thus computed. If a function component of a safe module is applied during this computation (which corresponds to an ill-founded recursive definition), the Undefined_recursive_module exception is raised at runtime:
If there are no safe modules along a dependency cycle, an error is raised
Note that, in the specification case, the module-types must be parenthesized if they use the with mod-constraint construct.
Private type declarations in module signatures, of the form type t = private ..., enable libraries to reveal some, but not all aspects of the implementation of a type to clients of the library. In this respect, they strike a middle ground between abstract type declarations, where no information is revealed on the type implementation, and data type definitions and type abbreviations, where all aspects of the type implementation are publicized. Private type declarations come in three flavors: for variant and record types (sectionââ12.3.1), for type abbreviations (sectionââ12.3.2), and for row types (sectionââ12.3.3).
(Introduced in Objective Caml 3.07)
|
Values of a variant or record type declared private can be de-structured normally in pattern-matching or via the expr . field notation for record accesses. However, values of these types cannot be constructed directly by constructor application or record construction. Moreover, assignment on a mutable field of a private record type is not allowed.
The typical use of private types is in the export signature of a module, to ensure that construction of values of the private type always goes through the functions provided by the module, while still allowing pattern-matching outside the defining module. For example:
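A minimal sketch of such a module (the names M, A and B follow the surrounding text):

module M : sig
  type t = private A | B of int
  val a : t
  val b : int -> t
end = struct
  type t = A | B of int
  let a = A
  let b n = assert (n > 0); B n
end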
Here, the private declaration ensures that in any value of type M.t, the argument to the B constructor is always a positive integer.
With respect to the variance of their parameters, private types are handled like abstract types. That is, if a private type has parameters, their variance is the one explicitly given by prefixing the parameter with a '+' or a '-'; they are invariant otherwise.
(Introduced in Objective Caml 3.11)
|
Unlike a regular type abbreviation, a private type abbreviation declares a type that is distinct from its implementation type typexpr. However, coercions from the type to typexpr are permitted. Moreover, the compiler âknowsâ the implementation type and can take advantage of this knowledge to perform type-directed optimizations.
The following example uses a private type abbreviation to define a module of nonnegative integers:
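A minimal sketch of such a module, consistent with the N.t and N.to_int names used below:

module N : sig
  type t = private int
  val of_int : int -> t
  val to_int : t -> int
end = struct
  type t = int
  let of_int n = assert (n >= 0); n
  let to_int n = n
end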
The type N.t is incompatible with int, ensuring that nonnegative integers and regular integers are not confused. However, if x has type N.t, the coercion (x :> int) is legal and returns the underlying integer, just like N.to_int x. Deep coercions are also supported: if l has type N.t list, the coercion (l :> int list) returns the list of underlying integers, like List.map N.to_int l but without copying the list l.
Note that the coercion ( expr :> typexpr ) is actually an abbreviated form, and will only work in presence of private abbreviations if neither the type of expr nor typexpr contain any type variables. If they do, you must use the full form ( expr : typexpr1 :> typexpr2 ) where typexpr1 is the expected type of expr. Concretely, this would be (x : N.t :> int) and (l : N.t list :> int list) for the above examples.
(Introduced in Objective Caml 3.09)
|
Private row types are type abbreviations where part of the structure of the type is left abstract. Concretely typexpr in the above should denote either an object type or a polymorphic variant type, with some possibility of refinement left. If the private declaration is used in an interface, the corresponding implementation may either provide a ground instance, or a refined private type.
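A sketch of a private row type over an object type; the module and method names are illustrative, and the declaration discussed in the next paragraph is the type c below:

module M : sig
  type c = private < x : int; .. >
  val o : c
end = struct
  class c = object method x = 3 method y = 2 end
  let o = new c
end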
This declaration does more than hiding the y method, it also makes the type c incompatible with any other closed object type, meaning that only o will be of type c. In that respect it behaves similarly to private record types. But private row types are more flexible with respect to incremental refinement. This feature can be used in combination with functors.
A polymorphic variant type t, for example type t = [ `A of int | `B of bool ], can be refined in two ways. A definition u may add new fields to t, and the declaration type u = private [> t ] will keep those new fields abstract. Construction of values of type u is possible using the known variants of t, but any pattern-matching will require a default case to handle the potential extra fields. Dually, a declaration v may restrict the fields of t through abstraction: the declaration type v = private [< t > `A ] corresponds to private variant types. One cannot create a value of the private type v, except using the constructors that are explicitly listed as present, `A n in this example; yet, when pattern-matching on a v, one should assume that any of the constructors of t could be present.
Similarly to abstract types, the variance of type parameters is not inferred, and must be given explicitly.
(Introduced in OCaml 3.12, short syntax added in 4.03)
|
The expression fun ( type typeconstr-name ) -> expr introduces a type constructor named typeconstr-name which is considered abstract in the scope of the sub-expression, but then replaced by a fresh type variable. Note that contrary to what the syntax could suggest, the expression fun ( type typeconstr-name ) -> expr itself does not suspend the evaluation of expr as a regular abstraction would. The syntax has been chosen to fit nicely in the context of function declarations, where it is generally used. It is possible to freely mix regular function parameters with pseudo type parameters, as in fun ( type t ) ( x : t ) -> …, and even to use the alternative syntax for declaring functions, let f ( type t ) ( x : t ) = …. If several locally abstract types need to be introduced, it is possible to use the syntax fun ( type typeconstr-name1 … typeconstr-namen ) -> expr as syntactic sugar for fun ( type typeconstr-name1 ) -> … -> fun ( type typeconstr-namen ) -> expr: for instance, fun ( type a b c ) -> … is equivalent to fun ( type a ) -> fun ( type b ) -> fun ( type c ) -> ….
This construction is useful because the type constructors it introduces can be used in places where a type variable is not allowed. For instance, one can use it to define an exception in a local module within a polymorphic function.
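A sketch of the local-exception idiom just described (the names f, M and E are illustrative):

let f (type t) () =
  let module M = struct exception E of t end in
  (fun x -> M.E x),
  (function M.E x -> Some x | _ -> None)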
Here is another example:
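A sketch that uses a locally abstract type to instantiate a functor inside a polymorphic function:

let sort_uniq (type s) (cmp : s -> s -> int) =
  let module S = Set.Make (struct type t = s let compare = cmp end) in
  fun l -> S.elements (List.fold_right S.add l S.empty)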
It is also extremely useful for first-class modules (see sectionââ12.5) and generalized algebraic datatypes (GADTs: see sectionââ12.10).
(Introduced in OCaml 4.00)
|
The (type typeconstr-name) syntax construction by itself does not make the type variable it introduces polymorphic, but it can be combined with explicit polymorphic annotations where needed. The above rule is provided as syntactic sugar to make this easier: a definition of the form let [rec] name : type a b. typexpr = expr is automatically expanded into a definition carrying the explicit polymorphic annotation 'a 'b. typexpr and whose body introduces a and b as locally abstract types with fun ( type a ) ( type b ) ->.
This syntax can be very useful when defining recursive functions involving GADTs, see the sectionââ12.10 for a more detailed explanation.
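A sketch of the kind of GADT definition where this syntax is needed (the term type and eval function are illustrative):

type _ term =
  | Int : int -> int term
  | Add : int term * int term -> int term
  | Pair : 'a term * 'b term -> ('a * 'b) term

let rec eval : type a. a term -> a = function
  | Int n -> n
  | Add (x, y) -> eval x + eval y
  | Pair (x, y) -> (eval x, eval y)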
The same feature is provided for method definitions.
(Introduced in OCaml 3.12; pattern syntax and package type inference introduced in 4.00; structural comparison of package types introduced in 4.02; fewer parens required starting from 4.05)
|
Modules are typically thought of as static components. This extension makes it possible to pack a module as a first-class value, which can later be dynamically unpacked into a module.
The expression ( module module-expr : package-type ) converts the module (structure or functor) denoted by module expression module-expr to a value of the core language that encapsulates this module. The type of this core language value is ( module package-type ). The package-type annotation can be omitted if it can be inferred from the context.
Conversely, the module expression ( val expr : package-type ) evaluates the core language expression expr to a value, which must have type ( module package-type ), and extracts the module that was encapsulated in this value. Again package-type can be omitted if the type of expr is known. If the module expression is already parenthesized, like the arguments of functors are, no additional parens are needed: Map.Make(val key).
The pattern ( module module-name : package-type ) matches a package with type package-type and binds it to module-name. It is not allowed in toplevel let bindings. Again package-type can be omitted if it can be inferred from the enclosing pattern.
The package-type syntactic class appearing in the ( module package-type ) type expression and in the annotated forms represents a subset of module types. This subset consists of named module types with optional constraints of a limited form: only non-parametrized types can be specified.
For type-checking purposes (and starting from OCaml 4.02), package types are compared using the structural comparison of module types.
In general, the module expression ( val expr : package-type ) cannot be used in the body of a functor, because this could cause unsoundness in conjunction with applicative functors. Since OCaml 4.02, this is relaxed in two ways: if package-type does not contain nominal type declarations (i.e. types that are created with a proper identity), then this expression can be used anywhere, and even if it contains such types it can be used inside the body of a generative functor, described in sectionââ12.15. It can also be used anywhere in the context of a local module binding let module module-name = ( val expr1 : package-type ) in expr2.
A typical use of first-class modules is to select at run-time among several implementations of a signature. Each implementation is a structure that we can encapsulate as a first-class module, then store in a data structure such as a hash table:
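A sketch of this idiom, using a hypothetical DEVICE signature and devices table:

module type DEVICE = sig
  val draw : unit -> unit
end

let devices : (string, (module DEVICE)) Hashtbl.t = Hashtbl.create 17

module SVG = struct let draw () = print_endline "drawing with SVG" end
module PDF = struct let draw () = print_endline "drawing with PDF" end

let () =
  Hashtbl.add devices "SVG" (module SVG : DEVICE);
  Hashtbl.add devices "PDF" (module PDF : DEVICE)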
We can then select one implementation based on command-line arguments, for instance by looking up the requested name in the devices table. Alternatively, the selection can be performed within a function that receives the name and unpacks the chosen module with ( val … ).
With first-class modules, it is possible to parametrize some code over the implementation of a module without using a functor. To use such a function, one can wrap the Set.Make functor, as in the sketch below.
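A sketch of both functions: sort is parametrized by a first-class Set module, and make_set wraps Set.Make to build one from a comparison function.

let sort (type s) (module S : Set.S with type elt = s) (l : s list) =
  S.elements (List.fold_right S.add l S.empty)

let make_set (type s) (cmp : s -> s -> int) =
  let module S = Set.Make (struct type t = s let compare = cmp end) in
  (module S : Set.S with type elt = s)

let sorted = sort (make_set Stdlib.compare) [3; 1; 2]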
(Introduced in OCaml 3.12)
|
The construction module type of module-expr expands to the module type (signature or functor type) inferred for the module expression module-expr. To make this module type reusable in many situations, it is intentionally not strengthened: abstract types and datatypes are not explicitly related with the types of the original module. For the same reason, module aliases in the inferred type are expanded.
A typical use, in conjunction with the signature-level include construct, is to extend the signature of an existing structure. In that case, one wants to keep the types equal to types in the original module. This can be done using the following idiom.
The signature MYHASH then contains all the fields of the signature of the module Hashtbl (with strengthened type definitions), plus the new field replace. An implementation of this signature can be obtained easily by using the include construct again, but this time at the structure level:
Another application where the absence of strengthening comes in handy is to provide an alternative implementation for an existing module.
This idiom guarantees that Myset is compatible with Set, but allows it to represent sets internally in a different way.
(Introduced in OCaml 3.12, generalized in 4.06)
|
A âdestructiveâ substitution (with ... := ...) behaves essentially like normal signature constraints (with ... = ...), but it additionally removes the redefined type or module from the signature.
Prior to OCaml 4.06, there were a number of restrictions: one could only remove types and modules at the outermost level (not inside submodules), and in the case of with type the definition had to be another type constructor with the same type parameters.
A natural application of destructive substitution is merging two signatures sharing a type name.
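A sketch of merging two signatures that share a type name (the Printable and Comparable names are illustrative):

module type Printable = sig
  type t
  val print : Format.formatter -> t -> unit
end

module type Comparable = sig
  type t
  val compare : t -> t -> int
end

module type PrintableComparable = sig
  include Printable
  include Comparable with type t := t
end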
One can also use this to completely remove a field: with the Comparable signature above, Comparable with type t := int is the signature sig val compare : int -> int -> int end, in which t no longer appears. Or one can use it to rename a field: sig type u include Comparable with type t := u end declares a type u together with a compare function over u.
Note that you can also remove manifest types, by substituting with the same type.
(Introduced in OCaml 4.08)
|
Local substitutions behave like destructive substitutions (with ... := ...) but instead of being applied to a whole signature after the fact, they are introduced during the specification of the signature, and will apply to all the items that follow.
This provides a convenient way to introduce local names for types and modules when defining a signature:
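A small sketch of local substitutions used to abbreviate names inside a signature (the names are illustrative):

module type S = sig
  type t := int * int            (* a local name, not exported *)
  module Sub := Stdlib.String    (* a local module alias *)
  val origin : t
  val to_string : t -> Sub.t
end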
Note that, unlike type declarations, type substitution declarations are not recursive, so substitutions like the following are rejected:
(Introduced in OCaml 4.13)
|
Module type substitutions essentially behave like type substitutions. They are useful to refine an abstract module type in a signature into a concrete module type. It is also possible to substitute a concrete module type with an equivalent module type.
However, such substitutions are never necessary.
Destructive module type substitution removes the module type substitution from the signature
If the right hand side of the substitution is not a path, then the destructive substitution is only valid if the left-hand side of the substitution is never used as the type of a first-class module in the original module type.
(Introduced in OCaml 4.02)
|
The specification module module-name = module-path, inside a signature, only matches a module definition equal to module-path. Conversely, a type-level module alias can be matched by itself, or by any supertype of the type of the module it references.
There are several restrictions on module-path:
Such specifications are also inferred. Namely, when P is a path satisfying the above constraints, the module definition module N = P has type module N = P.
Type-level module aliases are used when checking module path equalities. That is, in a context where module name N is known to be an alias for P, not only do these two module paths check as equal, but F(N) and F(P) are also recognized as equal. In the default compilation mode, this is the only difference with the previous approach of module aliases having just the same module type as the module they reference.
When the compiler flag -no-alias-deps is enabled, type-level module aliases are also exploited to avoid introducing dependencies between compilation units. Namely, a module alias referring to a module inside another compilation unit does not introduce a link-time dependency on that compilation unit, as long as it is not dereferenced; it still introduces a compile-time dependency if the interface needs to be read, i.e. if the module is a submodule of the compilation unit, or if some type components are referred to. Additionally, accessing a module alias introduces a link-time dependency on the compilation unit containing the module referenced by the alias, rather than the compilation unit containing the alias. Note that these differences in link-time behavior may be incompatible with the previous behavior, as some compilation units might not be extracted from libraries, and their side-effects ignored.
These weakened dependencies make it possible to use module aliases in place of the -pack mechanism. Suppose that you have a library Mylib composed of modules A and B. Using -pack, one would issue the command line
ocamlc -pack a.cmo b.cmo -o mylib.cmo
and as a result obtain a Mylib compilation unit, containing physically A and B as submodules, and with no dependencies on their respective compilation units. Here is a concrete example of a possible alternative approach: rename the implementation files to Mylib__A.ml and Mylib__B.ml (with their interfaces, if any, renamed accordingly), and create a file Mylib.ml containing only the aliases
module A = Mylib__A
module B = Mylib__B
Then compile Mylib.ml with -no-alias-deps, and the renamed units with -no-alias-deps and -open Mylib:
ocamlc -c -no-alias-deps Mylib.ml
ocamlc -c -no-alias-deps -open Mylib Mylib__*.mli Mylib__*.ml
Finally, build the library:
ocamlc -a Mylib*.cmo -o Mylib.cma
This approach lets you access A and B directly inside the library, and as Mylib.A and Mylib.B from outside. It also has the advantage that Mylib is no longer monolithic: if you use Mylib.A, only Mylib__A will be linked in, not Mylib__B.
Note the use of double underscores in Mylib__A and Mylib__B. These were chosen on purpose; the compiler uses the following heuristic when printing paths: given a path Lib__fooBar, if Lib.FooBar exists and is an alias for Lib__fooBar, then the compiler will always display Lib.FooBar instead of Lib__fooBar. This way the long Mylib__ names stay hidden and all the user sees is the nicer dot names. This is how the OCaml standard library is compiled.
(Introduced in OCaml 4.01)
|
Since OCaml 4.01, open statements shadowing an existing identifier (which is later used) trigger the warning 44. Adding a ! character after the open keyword indicates that such a shadowing is intentional and should not trigger the warning.
This is also available (since OCaml 4.06) for local opens in class expressions and class type expressions.
Generalized algebraic datatypes, or GADTs, extend usual sum types in two ways: constraints on type parameters may change depending on the value constructor, and some type variables may be existentially quantified. They are described in chapter 7.
(Introduced in OCaml 4.00)
|
Refutation cases. (Introduced in OCaml 4.03)
|
Explicit naming of existentials. (Introduced in OCaml 4.13.0)
|
(Introduced in Objective Caml 3.00)
|
This extension provides syntactic sugar for getting and setting elements in the arrays provided by the Bigarray module.
The short expressions are translated into calls to functions of the Bigarray module as described in the following table.
expression | translation |
expr0.{expr1} | Bigarray.Array1.get expr0 expr1 |
expr0.{expr1} <-expr | Bigarray.Array1.set expr0 expr1 expr |
expr0.{expr1, expr2} | Bigarray.Array2.get expr0 expr1 expr2 |
expr0.{expr1, expr2} <-expr | Bigarray.Array2.set expr0 expr1 expr2 expr |
expr0.{expr1, expr2, expr3} | Bigarray.Array3.get expr0 expr1 expr2 expr3 |
expr0.{expr1, expr2, expr3} <-expr | Bigarray.Array3.set expr0 expr1 expr2 expr3 expr |
expr0.{expr1, …, exprn} | Bigarray.Genarray.get expr0 [| expr1, …, exprn |] |
expr0.{expr1, …, exprn} <-expr | Bigarray.Genarray.set expr0 [| expr1, …, exprn |] expr |
The last two entries are valid for any n > 3.
(Introduced in OCaml 4.02, infix notations for constructs other than expressions added in 4.03)
Attributes are âdecorationsâ of the syntax tree which are mostly ignored by the type-checker but can be used by external tools. An attribute is made of an identifier and a payload, which can be a structure, a type expression (prefixed with :), a signature (prefixed with :) or a pattern (prefixed with ?) optionally followed by a when clause:
|
The first form of attributes is attached with a postfix notation on âalgebraicâ categories:
|
This form of attributes can also be inserted after the `tag-name in polymorphic variant type expressions (tag-spec-first, tag-spec, tag-spec-full) or after the method-name in method-type.
The same syntactic form is also used to attach attributes to labels and constructors in type declarations:
|
Note: when a label declaration is followed by a semicolon, attributes can also be put after the semicolon (in which case they are merged with those specified before).
The second form of attributes is attached to "blocks" such as type declarations, class fields, etc.:
|
A third form of attributes appears as stand-alone structure or signature items in the module or class sub-languages. They are not attached to any specific node in the syntax tree:
|
(Note: contrary to what the grammar above describes, item-attributes cannot be attached to these floating attributes in class-field-spec and class-field.)
It is also possible to specify attributes using an infix syntax. For instance:
let[@foo] x = 2 in x + 1       === (let x = 2 [@@foo] in x + 1)
begin[@foo][@bar x] ... end    === (begin ... end)[@foo][@bar x]
module[@foo] M = ...           === module M = ... [@@foo]
type[@foo] t = T               === type t = T [@@foo]
method[@foo] m = ...           === method m = ... [@@foo]
For let, the attributes are applied to each binding:
let[@foo] x = 2 and y = 3 in x + y          === (let x = 2 [@@foo] and y = 3 in x + y)
let[@foo] x = 2 and[@bar] y = 3 in x + y    === (let x = 2 [@@foo] and y = 3 [@@bar] in x + y)
Some attributes are understood by the type-checker:
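For instance, two attributes interpreted by the compiler itself are deprecated and inline (a minimal sketch; the function names are illustrative):

let fast_succ x = x + 1 [@@inline]
let old_succ x = x + 1 [@@deprecated "use fast_succ instead"]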
(Introduced in OCaml 4.02, infix notations for constructs other than expressions added in 4.03, infix notation (e1 ;%ext e2) added in 4.04. )
Extension nodes are generic placeholders in the syntax tree. They are rejected by the type-checker and are intended to be âexpandedâ by external tools such as -ppx rewriters.
Extension nodes share the same notion of identifier and payload as attributesââ12.12.
The first form of extension node is used for âalgebraicâ categories:
|
A second form of extension node can be used in structures and signatures, both in the module and object languages:
|
An infix form is available for extension nodes when the payload is of the same kind (expression with expression, pattern with pattern ...).
Examples:
let%foo x = 2 in x + 1     === [%foo let x = 2 in x + 1]
begin%foo ... end          === [%foo begin ... end]
x ;%foo 2                  === [%foo x; 2]
module%foo M = ..          === [%%foo module M = ... ]
val%foo x : t              === [%%foo: val x : t]
When this form is used together with the infix syntax for attributes, the attributes are considered to apply to the payload:
fun%foo[@bar] x -> x + 1 === [%foo (fun x -> x + 1)[@bar ] ];
An additional shorthand let%foo x in ... is available for convenience when extension nodes are used to implement binding operators (See 12.23.1).
Furthermore, quoted strings {|...|} can be combined with extension nodes to embed foreign syntax fragments. Those fragments can be interpreted by a preprocessor and turned into OCaml code without requiring escaping quotes. A syntax shortcut is available for them:
{%%foo|...|}                === [%%foo{|...|}]
let x = {%foo|...|}         === let x = [%foo{|...|}]
let y = {%foo bar|...|bar}  === let y = [%foo{bar|...|bar}]
For instance, you can use {%sql|...|} to represent arbitrary SQL statements, assuming you have a ppx-rewriter that recognizes the %sql extension.
Note that the word-delimited form, for example {sql|...|sql}, should not be used for signaling that an extension is in use. Indeed, the user cannot see from the code whether this string literal has different semantics than they expect. Moreover, giving semantics to a specific delimiter limits the freedom to change the delimiter to avoid escaping issues.
(Introduced in OCaml 4.03)
Some extension nodes are understood by the compiler itself:
(Introduced in OCaml 4.02)
|
Extensible variant types are variant types which can be extended with new variant constructors. Extensible variant types are defined using ... New variant constructors are added using +=.
Pattern matching on an extensible variant type requires a default case to handle unknown variant constructors:
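A minimal sketch of an extensible variant type, its extension, and a match with the required default case (the attr type is illustrative):

type attr = ..                          (* an extensible variant type *)

type attr += Str of string              (* extend it with new constructors *)
type attr += Int of int | Float of float

(* matching requires a default case for constructors added elsewhere *)
let to_string = function
  | Str s -> s
  | Int n -> string_of_int n
  | Float f -> string_of_float f
  | _ -> "<unknown attribute>"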
A preexisting example of an extensible variant type is the built-in exn type used for exceptions. Indeed, exception constructors can be declared using the type extension syntax:
Extensible variant constructors can be rebound to a different name. This allows exporting variants from another module.
Extensible variant constructors can be declared private. As with regular variants, this prevents them from being constructed directly by constructor application while still allowing them to be de-structured in pattern-matching.
(Introduced in OCaml 4.06)
|
Extensible variant types can be declared private. This prevents new constructors from being declared directly, but allows extension constructors to be referred to in interfaces.
(Introduced in OCaml 4.02)
|
A generative functor takes a unit () argument. In order to use it, one must necessarily apply it to this unit argument, ensuring that all type components in the result of the functor behave in a generative way, i.e. they are different from types obtained by other applications of the same functor. This is equivalent to taking an argument of signature sig end, and always applying to struct end, but not to some defined module (in the latter case, applying twice to the same module would return identical types).
As a side-effect of this generativity, one is allowed to unpack first-class modules in the body of generative functors.
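A minimal sketch of a generative functor and of the fresh types produced by each application (the names are illustrative):

module Id () = struct
  type t = T of int
  let make n = T n
end

module A = Id ()
module B = Id ()
(* A.t and B.t are distinct, incompatible types *)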
(Introduced in OCaml 4.02.2, extended in 4.03)
Some syntactic constructions are accepted during parsing and rejected during type checking. These syntactic constructions can therefore not be used directly in vanilla OCaml. However, -ppx rewriters and other external tools can exploit this parser leniency to extend the language with these new syntactic constructions by rewriting them to vanilla constructions.
(Introduced in OCaml 4.02.2, extended to unary operators in OCaml 4.12.0)
|
There are two classes of operators available for extensions: infix operators with a name starting with a # character and containing more than one # character, and unary operators with a name (starting with a ?, ~, or ! character) containing at least one # character.
For instance:
Note that both ## and !# must be eliminated by a ppx rewriter to make this example valid.
(Introduced in OCaml 4.03)
|
Int and float literals followed by a one-letter identifier in the range [g..z] or [G..Z] are extension-only literals.
(Introduced in OCaml 4.03)
|
The arguments of sum-type constructors can now be defined using the same syntax as records. Mutable and polymorphic fields are allowed. GADT syntax is supported. Attributes can be specified on individual fields.
Syntactically, building or matching constructors with such an inline record argument is similar to working with a unary constructor whose unique argument is a declared record type. A pattern can bind the inline record as a pseudo-value, but the record cannot escape the scope of the binding and can only be used with the dot-notation to extract or modify fields or to build new constructor values.
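A minimal sketch of constructors with inline record arguments (the shape type is illustrative):

type shape =
  | Point
  | Circle of { radius : float }
  | Rect of { width : float; height : float }

let area = function
  | Point -> 0.
  | Circle { radius } -> Float.pi *. radius *. radius
  | Rect r -> r.width *. r.height   (* r is bound to the inline record *)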
(Introduced in OCaml 4.03)
Comments which start with ** are treated specially by the compiler. They are automatically converted during parsing into attributes (see 12.12) to allow tools to process them as documentation.
Such comments can take three forms: floating comments, item comments and label comments. Any comment starting with ** which does not match one of these forms will cause the compiler to emit warning 50.
Comments which start with ** are also used by the ocamldoc documentation generator (see 19). The three comment forms recognised by the compiler are a subset of the forms accepted by ocamldoc (see 19.2).
Comments surrounded by blank lines that appear within structures, signatures, classes or class types are converted into floating-attributes. For example:
will be converted to:
Comments which appear immediately before or immediately after a structure item, signature item, class item or class type item are converted into item-attributes. Immediately before or immediately after means that there must be no blank lines, ;;, or other documentation comments between them. For example:
or
will be converted to:
Note that, if a comment appears immediately next to multiple items, as in:
then it will be attached to both items:
and the compiler will emit warning 50.
Comments which appear immediately after a labelled argument, record field, variant constructor, object method or polymorphic variant constructor are converted into attributes. Immediately after means that there must be no blank lines or other documentation comments between them. For example:
will be converted to:
Note that label comments take precedence over item comments, so:
will be converted to:
whilst:
will be converted to:
In the absence of meaningful comment on the last constructor of a type, an empty comment (**) can be used instead:
will be converted directly to
(Introduced in 4.06)
|
This extension provides syntactic sugar for getting and setting elements for user-defined indexed types. For instance, we can define Python-like dictionaries with
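a pair of indexing operators over Hashtbl, sketched here with the ( .%{} ) name used later in this section:

let ( .%{} ) tbl key = Hashtbl.find tbl key
let ( .%{}<- ) tbl key value = Hashtbl.replace tbl key value

let dict = Hashtbl.create 13
let () = dict.%{"one"} <- 1
let one = dict.%{"one"}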
|
Multi-index are also supported through a second variant of indexing operators
which is called when the index literal contains a semicolon-separated list of expressions with two or more elements:
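A sketch of such a multi-index operator, assuming the ( .%{;..} ) naming scheme and delegating to Bigarray.Genarray:

let ( .%{;..} ) a indices = Bigarray.Genarray.get a indices
let ( .%{;..}<- ) a indices v = Bigarray.Genarray.set a indices v
(* a.%{0; 1; 2} is then a call to ( .%{;..} ) a [|0; 1; 2|] *)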
In particular this multi-index notation makes it possible to uniformly handle indexing Genarray and other implementations of multidimensional arrays.
Beware that the differentiation between the multi-index and single index operators is purely syntactic: multi-index operators are restricted to index expressions that contain one or more semicolons ;. For instance,
is equivalent to
Notice that in the vec case, we are calling the single-index operator, ( .%{} ), and not the multi-index variant, ( .%{;..} ). For this reason, it is expected that most users of multi-index operators will also need to define the corresponding single-index variant
to handle both cases uniformly.
(Introduced in 4.07.0)
|
This extension allows users to define empty variants. An empty variant type can be eliminated by a refutation case in pattern matching.
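A minimal sketch:

type empty = |

(* the refutation case "." tells the type-checker there is nothing to match *)
let absurd (x : empty) : 'a = match x with _ -> .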
(Introduced in 4.08)
Since OCaml 4.08, it is possible to mark components (such as value or type declarations) in signatures with âalertsâ that will be reported when those components are referenced. This generalizes the notion of âdeprecatedâ components which were previously reported as warning 3. Those alerts can be used for instance to report usage of unsafe features, or of features which are only available on some platforms, etc.
Alert categories are identified by a symbolic identifier (a lowercase identifier, following the usual lexical rules) and an optional message. The identifier is used to control which alerts are enabled, and which ones are turned into fatal errors. The message is reported to the user when the alert is triggered (i.e. when the marked component is referenced).
The ocaml.alert or alert attribute serves two purposes: (i) to mark a component with an alert to be triggered when the component is referenced, and (ii) to control which alert names are enabled. In the first form, the attribute takes an identifier possibly followed by a message. Here is an example of a value declaration marked with an alert:
module U : sig
  val fork : unit -> bool
  [@@alert unix "This function is only available under Unix."]
end
Here unix is the identifier for the alert. If this alert category is enabled, any reference to U.fork will produce a message at compile time, which may or may not be turned into a fatal error.
And here is another example as a floating attribute on top of an â.mliâ file (i.e. before any other non-attribute item) or on top of an â.mlâ file without a corresponding interface file, so that any reference to that unit will trigger the alert:
[@@@alert unsafe "This module is unsafe!"]
Controlling which alerts are enabled and whether they are turned into fatal errors is done either through the compiler's command-line option -alert <spec> or locally in the code through the alert or ocaml.alert attribute taking a single string payload <spec>. In both cases, the syntax for <spec> is a concatenation of items of the form:
+id enables alert id;
-id disables alert id;
++id turns alert id into a fatal error;
--id turns alert id into a non-fatal alert;
@id enables alert id and turns it into a fatal error (equivalent to ++id+id).
As a special case, if id is all, it stands for all alerts.
Here are some examples:
(* Disable all alerts, reenable just unix (as a soft alert) and window
   (as a fatal error), for the rest of the current structure *)
[@@@alert "-all--all+unix@window"]
...
let x =
  (* Locally disable the window alert *)
  begin[@alert "-window"]
    ...
  end
Before OCaml 4.08, there was support for a single kind of deprecation alert. It is now known as the deprecated alert, but legacy attributes to trigger it and the legacy ways to control it as warning 3 are still supported. For instance, passing -w +3 on the command-line is equivalent to -alert +deprecated, and:
val x: int
[@@ocaml.deprecated "Please do something else"]
is equivalent to:
val x: int
[@@ocaml.alert deprecated "Please do something else"]
(Introduced in 4.08)
|
This extension makes it possible to open any module expression in module structures and expressions. A similar mechanism is also available inside module types, but only for extended module paths (e.g. F(X).G(Y)).
For instance, a module can be constrained to a smaller signature when it is opened. Another possibility is to immediately open the result of a functor application, as in open Set.Make (Int) at the beginning of a structure.
Going further, this construction can introduce local components inside a structure,
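A sketch of open struct … end introducing components that stay local to the enclosing structure (the Counter name is illustrative):

module Counter = struct
  open struct let count = ref 0 end   (* count is not a component of Counter *)
  let next () = incr count; !count
  let current () = !count
end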
One important restriction is that types introduced by open struct ... end cannot appear in the signature of the enclosing structure, unless they are defined equal to some non-local type. Exporting a value x whose locally opened type t is an abbreviation for a non-local type is therefore accepted, but exporting x is rejected when t exists only locally, because x cannot then be given any type valid outside the open struct ... end; it would, however, be accepted if x, too, were local.
Inside signatures, extended opens are limited to extended module paths, such as open F(X).G(Y), and not arbitrary module expressions such as
open struct type t = int end
In those situations, local substitutions (see 12.7.2) can be used instead.
Beware that this extension is not available inside class definitions:
class c = let open Set.Make(Int) in ...
(Introduced in 4.08.0)
|
Users can define let operators and then apply them using a convenient binding syntax, which the compiler expands into ordinary function applications, as in the sketch below.
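A sketch of a let operator over the option type, its convenient syntax, and the expanded form it abbreviates (find_and_sum is an illustrative name):

let ( let* ) o f =
  match o with
  | None -> None
  | Some x -> f x

let return x = Some x

(* the convenient syntax *)
let find_and_sum tbl k1 k2 =
  let* x1 = Hashtbl.find_opt tbl k1 in
  let* x2 = Hashtbl.find_opt tbl k2 in
  return (x1 + x2)

(* its expanded form *)
let find_and_sum_expanded tbl k1 k2 =
  ( let* ) (Hashtbl.find_opt tbl k1) (fun x1 ->
    ( let* ) (Hashtbl.find_opt tbl k2) (fun x2 ->
      return (x1 + x2)))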
Users can also define and operators, which allow several bindings to be combined in a single let; these are likewise expanded into ordinary applications, as in the sketch below.
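A matching and operator, the sum3 function referred to below, and its expanded form:

let ( and* ) a b =
  match a, b with
  | Some x, Some y -> Some (x, y)
  | _ -> None

(* the convenient syntax *)
let sum3 x y z =
  let* x = x and* y = y and* z = z in
  return (x + y + z)

(* its expanded form *)
let sum3_expanded x y z =
  ( let* ) (( and* ) (( and* ) x y) z) (fun ((x, y), z) ->
    return (x + y + z))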
(Introduced in 4.13.0)
When the expression being bound is a variable, it can be convenient to use the shorthand notation let+ x in ..., which expands to let+ x = x in .... This notation, also known as let-punning, allows the sum3 function above to be written more concisely as:
This notation is also supported for extension nodes, expanding let%foo x in ... to let%foo x = x in .... However, to avoid confusion, this notation is not supported for plain let bindings.
This extension is intended to provide a convenient syntax for working with monads and applicatives.
An applicative should provide a module implementing the following interface:
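A sketch of such an interface (the module type name is illustrative):

module type Applicative_syntax = sig
  type 'a t
  val ( let+ ) : 'a t -> ('a -> 'b) -> 'b t
  val ( and+ ) : 'a t -> 'b t -> ('a * 'b) t
end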
where (let+) is bound to the map operation and (and+) is bound to the monoidal product operation.
A monad should provide a module implementing the following interface:
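A sketch of the corresponding monadic interface:

module type Monad_syntax = sig
  type 'a t
  val ( let* ) : 'a t -> ('a -> 'b t) -> 'b t
  val ( and* ) : 'a t -> 'b t -> ('a * 'b) t
end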
where (let*) is bound to the bind operation, and (and*) is also bound to the monoidal product operation.
(Introduced in 5.0)
Note: Effect handlers in OCaml 5.0 should be considered experimental. Effect handlers are exposed in the standard library as a thin wrapper around their implementations in the runtime. They are not supported as a language feature with new syntax. You can rely on them to build non-local control-flow abstractions such as user-level threading that do not expose the effect handler primitives to the user. Expect breaking changes in the future.
Effect handlers are a mechanism for modular programming with user-defined effects. Effect handlers allow the programmers to describe computations that perform effectful operations, whose meaning is described by handlers that enclose the computations. Effect handlers are a generalization of exception handlers and enable non-local control-flow mechanisms such as resumable exceptions, lightweight threads, coroutines, generators and asynchronous I/O to be composably expressed. In this tutorial, we shall see how some of these mechanisms can be built using effect handlers.
To understand the basics, let us define an effect (that is, an operation) that takes an integer argument and returns an integer result. We name this effect Xchg.
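A sketch of this declaration and of the computation comp1 described in the next paragraph (the Effect and Effect.Deep modules are opened here and reused by the following sketches):

open Effect
open Effect.Deep

type _ Effect.t += Xchg : int -> int Effect.t

(* a computation that performs the Xchg effect twice *)
let comp1 () = perform (Xchg 0) + perform (Xchg 1)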
We declare the exchange effect Xchg by extending the pre-defined extensible variant type Effect.t with a new constructor Xchg: int -> int t. The declaration may be intuitively read as âthe Xchg effect takes an integer parameter, and when this effect is performed, it returns an integerâ. The computation comp1 performs the effect twice using the perform primitive and returns their sum.
We can handle the Xchg effect by implementing a handler that always returns the successor of the offered value:
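A sketch of such a handler, consistent with the description that follows:

let () =
  let result =
    try_with comp1 ()
      { effc = fun (type a) (eff : a Effect.t) ->
          match eff with
          | Xchg n -> Some (fun (k : (a, _) continuation) -> continue k (n + 1))
          | _ -> None }
  in
  Printf.printf "%d\n" result   (* prints 3 *)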
try_with runs the computation comp1 () under an effect handler that handles the Xchg effect. As mentioned earlier, effect handlers are a generalization of exception handlers. Similar to exception handlers, when the computation performs the Xchg effect, the control jumps to the corresponding handler. However, unlike exception handlers, the handler is also provided with the delimited continuation k, which represents the suspended computation between the point of perform and this handler.
The handler uses the continue primitive to resume the suspended computation with the successor of the offered value. In this example, the computation comp1 performs Xchg 0 and Xchg 1 and receives the values 1 and 2 from the handler respectively. Hence, the whole expression evaluates to 3.
It is useful to note that we must use a locally abstract type (type a) in the effect handler. The type Effect.t is a GADT, and the effect declarations may have different type parameters for different effects. The type parameter a in the type a Effect.t represents the type of the value returned when performing the effect. From the fact that eff has type a Effect.t and from the fact that Xchg n has type int Effect.t, the type-checker deduces that a must be int, which is why we are allowed to pass the integer value n+1 as an argument to continue k.
Another point to note is that the catch-all case â| _ -> Noneâ is necessary when handling effects. This case may be intuitively read as âforward the unhandled effects to the outer handlerâ.
In this example, we use the deep version of effect handlers as opposed to the shallow version. A deep handler monitors a computation until the computation terminates (either normally or via an exception), and handles all of the effects performed (in sequence) by the computation. In contrast, a shallow handler monitors a computation until either the computation terminates or the computation performs one effect, and it handles this single effect only. In situations where they are applicable, deep handlers are usually preferred. An example that utilises shallow handlers is discussed later in section 12.24.6.
The expressive power of effect handlers comes from the delimited continuation. While the previous example immediately resumed the computation, the computation may be resumed later, running some other computation in the interim. Let us extend the previous example and implement message-passing concurrency between two concurrent computations using the Xchg effect. We call these concurrent computations tasks.
A task either is in a suspended state or is completed. We represent the task status as follows:
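A sketch of the status type matching the description below:

type 'a status =
  | Complete of 'a
  | Suspended of { msg : int; cont : (int, 'a status) continuation }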
A task either is complete, with a result of type 'a, or is suspended with the message msg to send and the continuation cont. The type (int,'a status) continuation says that the suspended computation expects an int value to resume and returns a 'a status value when resumed.
Next, we define a step function that executes one step of computation until it completes or suspends:
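A sketch of the step function, consistent with the description that follows:

let step (f : unit -> 'a) () : 'a status =
  match_with f ()
    { retc = (fun v -> Complete v);
      exnc = raise;
      effc = fun (type c) (eff : c Effect.t) ->
        match eff with
        | Xchg msg -> Some (fun (k : (c, _) continuation) ->
            Suspended { msg; cont = k })
        | _ -> None }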
The argument to the step function, f, is a computation that can perform an Xchg effect and returns a result of type 'a. The step function itself returns a 'a status value.
In the step function, we use the match_with primitive. Like try_with, the match_with primitive installs an effect handler. However, unlike try_with, where only the effect case effc is provided, match_with expects the handlers for the value (retc) and exceptional (exnc) return cases. In fact, try_with can be defined using match_with as follows: let try_with f v {effc} = match_with f v {retc = Fun.id; exnc = raise; effc}.
In the step function, the value case retc wraps the result of the computation in Complete, the exception case exnc simply re-raises, and the effect case suspends the computation by capturing the pending message and continuation in a Suspended status.
Since the step function handles the Xchg effect, step f is a computation that does not perform the Xchg effect. It may however perform other effects. Moreover, since we are using deep handlers, the continuation cont stored in the status does not perform the Xchg effect.
We can now write a simple scheduler that runs a pair of tasks to completion:
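A sketch of such a scheduler: run_both takes the two stepped computations and pairs up their exchanges.

let rec run_both a b =
  match a (), b () with
  | Complete va, Complete vb -> (va, vb)
  | Suspended { msg = m1; cont = k1 }, Suspended { msg = m2; cont = k2 } ->
      (* each task receives the value offered by the other one *)
      run_both (fun () -> continue k1 m2) (fun () -> continue k2 m1)
  | _ -> failwith "Improper synchronization"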
Both of the tasks may run to completion, or both may offer to exchange a message. In the latter case, each computation receives the value offered by the other computation. The situation where one computation offers an exchange while the other computation terminates is regarded as a programmer error, and causes the handler to raise an exception.
We can now define a second computation that also exchanges two messages:
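A sketch of comp2, consistent with the values described below:

let comp2 () = perform (Xchg 21) * perform (Xchg 21)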
Finally, we can run the two computations together:
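Running the two computations with the run_both sketch above:

let (x, y) = run_both (step comp1) (step comp2)
(* x = 42 and y = 0 *)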
The computation comp1 offers the values 0 and 1 and in exchange receives the values 21 and 21, which it adds, producing 42. The computation comp2 offers the values 21 and 21 and in exchange receives the values 0 and 1, which it multiplies, producing 0. The communication between the two computations is programmed entirely inside run_both. Indeed, the definitions of comp1 and comp2, alone, do not assign any meaning to the Xchg effect.
Let us extend the previous example for an arbitrary number of tasks. Many languages such as GHC Haskell and Go provide user-level threads as a primitive feature implemented in the runtime system. With effect handlers, user-level threads and their schedulers can be implemented in OCaml itself. Typically, user-level threading systems provide a fork primitive to spawn off a new concurrent task and a yield primitive to yield control to some other task. Correspondingly, we shall declare two effects as follows:
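A sketch of the two declarations, consistent with the description that follows:

type _ Effect.t += Fork : (unit -> unit) -> unit Effect.t
type _ Effect.t += Yield : unit Effect.t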
The Fork effect takes a thunk (a suspended computation, represented as a function of type unit -> unit) and returns a unit to the performer. The Yield effect is unparameterized and returns a unit when performed. Let us consider that a task performing an Xchg effect may match with any other task also offering to exchange a value.
We shall also define helper functions that simply perform these effects:
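A sketch of these helpers:

let fork f = perform (Fork f)
let yield () = perform Yield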
A top-level run function defines the scheduler:
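A sketch of such a scheduler, following the description in the next paragraphs; the names run_q, exchanger and spawn come from that description, and xchg is the helper used later in this section:

let xchg n = perform (Xchg n)

let run (main : unit -> unit) : unit =
  (* queue of suspended tasks that are ready to run *)
  let run_q : (unit -> unit) Queue.t = Queue.create () in
  let enqueue k v = Queue.push (fun () -> continue k v) run_q in
  let dequeue () =
    if Queue.is_empty run_q then () else (Queue.pop run_q) ()
  in
  (* at most one task waiting to exchange a value *)
  let exchanger : (int * (int, unit) continuation) option ref = ref None in
  let rec spawn (f : unit -> unit) : unit =
    match_with f ()
      { retc = (fun () -> dequeue ());
        exnc = (fun e -> print_endline (Printexc.to_string e); dequeue ());
        effc = fun (type a) (eff : a Effect.t) ->
          match eff with
          | Yield -> Some (fun (k : (a, unit) continuation) ->
              enqueue k (); dequeue ())
          | Fork f -> Some (fun (k : (a, unit) continuation) ->
              enqueue k (); spawn f)
          | Xchg n -> Some (fun (k : (a, unit) continuation) ->
              match !exchanger with
              | Some (n', k') -> exchanger := None; enqueue k' n; continue k n'
              | None -> exchanger := Some (n, k); dequeue ())
          | _ -> None }
  in
  spawn main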
We use a mutable queue run_q to hold the scheduler queue. The FIFO queue enables round-robin scheduling of tasks in the scheduler. enqueue inserts tasks into the queue, and dequeue extracts tasks from the queue and runs them. The reference cell exchanger holds a (suspended) task offering to exchange a value. At any time, there is either zero or one suspended task that is offering an exchange.
The heavy lifting is done by the spawn function. The spawn function runs the given computation f in an effect handler. If f returns with a value (case retc), we dequeue and run the next task from the scheduler queue. If the computation f raises an exception (case exnc), we print the exception and run the next task from the scheduler.
The computation f may also perform effects. If f performs the Yield effect, the current task is suspended (inserted into the queue of ready tasks), and the next task from the scheduler queue is run. If the effect is Fork f, then the current task is suspended, and the new task f is executed immediately via a tail call to spawn f. Note that this choice to run the new task first is arbitrary. We could very well have chosen instead to insert the task for f into the ready queue and resumed k immediately.
If the effect is Xchg, then we first check whether there is a task waiting to exchange. If so, we enqueue the waiting task with the current value being offered and immediately resume the current task with the value being offered. If not, we make the current task the waiting exchanger, and run the next task from the scheduler queue.
Note that this scheduler code is not perfect: it can leak resources. We shall explain and fix this in the next section (12.24.3).
Now we can write a concurrent program that utilises the newly defined operations:
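A sketch of such a program, whose output interleaves the two tasks as described below:

let task name () =
  Printf.printf "starting %s\n" name;
  yield ();
  Printf.printf "ending %s\n" name

let () =
  run (fun () ->
    fork (task "a");
    fork (task "b"))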
Observe that the messages from the two tasks are interleaved. Notice also that the snippet above makes no reference to the effect handlers and is in direct style (no monadic operations). This example illustrates that, with effect handlers, the user code in a concurrent program can remain in simple direct style, and the use of effect handlers can be fully contained within the concurrency library implementation.
In addition to resuming a continuation with a value, effect handlers also permit resuming by raising an effect at the point of perform. This is done with the help of the discontinue primitive. The discontinue primitive helps ensure that resources are always eventually deallocated, even in the presence of effects.
For example, consider the dequeue operation in the previous example.
If the scheduler queue is empty, dequeue considers that the scheduler is done and returns to the caller. However, there may still be a task waiting to exchange a value (stored in the reference cell exchanger), which remains blocked forever! If the blocked task holds onto resources, these resources are leaked. For example, consider the following task:
The task writes the received message to the file secret.txt. It uses Fun.protect to ensure that the output channel oc is closed on both normal and exceptional return cases. Unfortunately, this is not sufficient. If the exchange effect xchg 0 cannot be matched with an exchange effect performed by some other thread, then this task remains blocked forever. Thus, the output channel oc is never closed.
To avoid this problem, one must adhere to a simple discipline: every continuation must be eventually either continued or discontinued. Here, we use discontinue to ensure that the blocked task does not remain blocked forever: by discontinuing this task, we force it to terminate with an exception.
When the scheduler queue is empty and there is a blocked exchanger thread, the dequeue function discontinues the blocked thread with an Improper_synchronization exception. This exception is raised at the blocked xchg function call, which causes the finally block to be run and closes the output channel oc. From the point of view of the user, it seems as though the function call xchg 0 raises the exception Improper_synchronization.
When it comes to performing traversals on a data structure, there are two fundamental ways depending on whether the producer or the consumer has the control over the traversal. For example, in List.iter f l, the producer List.iter has the control and pushes the elements to the consumer f, which processes them. On the other hand, the Seq module provides a mechanism similar to delayed lists where the consumer controls the traversal. For example, Seq.forever Random.bool returns an infinite sequence of random bits where every bit is produced (on demand) when queried by the consumer.
Naturally, producers such as List.iter are easier to write in the former style. The latter style is ergonomically better for the consumer since it is preferable and more natural to be in control. To have the best of both worlds, we would like to write a producer in the former style and automatically convert it to the latter style. The conversion can be written once and for all as a library function, thanks to effect handlers. Let us name this function invert. We will first look at how to use the invert function before looking at its implementation details. The type of this function is (('a -> unit) -> unit) -> 'a Seq.t.
The invert function takes an iter function (a producer that pushes elements to the consumer) and returns a sequence (where the consumer has the control). For example,
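here is a small sketch of one, an iterator over a three-element list:

let lst_iter (f : int -> unit) : unit = List.iter f [1; 2; 3]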
The function lst_iter is an iter function with type (int -> unit) -> unit. The expression lst_iter f pushes the elements 1, 2 and 3 to the consumer f; for example, lst_iter print_int prints 123.
The expression invert lst_iter returns a sequence that allows the consumer to traverse the list on demand. For example,
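one can traverse the resulting sequence directly (using the invert sketch given further below):

let () = Seq.iter (Printf.printf "%d ") (invert lst_iter)   (* prints 1 2 3 *)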
We can use the same invert function on any iter function.
The implementation of the invert function is given below:
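A sketch of the invert function, consistent with the description that follows; the effect is declared in a local module so that it can mention the locally abstract element type:

let invert (type a) (iter : (a -> unit) -> unit) : a Seq.t =
  let module M = struct
    type _ Effect.t += Yield : a -> unit Effect.t
  end in
  let yield v = perform (M.Yield v) in
  fun () ->
    match_with iter yield
      { retc = (fun () -> Seq.Nil);
        exnc = raise;
        effc = fun (type b) (eff : b Effect.t) ->
          match eff with
          | M.Yield v -> Some (fun (k : (b, _) continuation) ->
              Seq.Cons (v, continue k))
          | _ -> None }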
The invert function declares an effect Yield that takes the element to be yielded as a parameter. The yield function performs the Yield effect. The lambda abstraction fun () -> ... delays all action until the first element of the sequence is demanded. Once this happens, the computation iter yield is executed under an effect handler. Every time the iter function pushes an element to the yield function, the computation is interrupted by the Yield effect. The Yield effect is handled by returning the value Seq.Cons(v,continue k) to the consumer. The consumer gets the element v as well as the suspended computation, which in the consumerâs eyes is just the tail of sequence.
When the consumer demands the next element from the sequence (by applying it to ()), the continuation k is resumed. This allows the computation iter yield to make progress, until it either yields another element or terminates normally. In the latter case, the value Seq.Nil is returned, indicating to the consumer that the iteration is over.
It is important to note that the sequence returned by the invert function is ephemeral (as defined by the Seq module) i.e., the sequence must be used at most once. Additionally, the sequence must be fully consumed (i.e., used at least once) so as to ensure that the captured continuation is used linearly.
In this section, we shall see the semantics of effect handlers with the help of examples.
Like exception handlers, effect handlers can be nested.
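A sketch of the nested handlers described below (the names foo, bar, baz, E and F follow the text):

type _ Effect.t += E : string Effect.t
type _ Effect.t += F : string Effect.t

let foo () = perform F

(* the inner handler handles only E *)
let bar () =
  try_with foo ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | E -> Some (fun (k : (a, _) continuation) -> continue k "Coucou!")
        | _ -> None }

(* the outer handler handles F *)
let baz () =
  try_with bar ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | F -> Some (fun (k : (a, _) continuation) -> continue k "Hello, world!")
        | _ -> None }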
In this example, the computation foo performs F, the inner handler handles only E and the outer handler handles F. The call to baz returns Hello, world!.
It is useful to know a little bit about the implementation of effect handlers to appreciate the design choices and their performance characteristics. Effect handlers are implemented with the help of runtime-managed, dynamically growing segments of stack called fibers. The program stack in OCaml is a linked list of such fibers.
A new fiber is allocated for evaluating the computation enclosed by an effect handler. The fiber is freed when the computation returns to the caller either normally by returning a value or by raising an exception.
At the point of perform in foo in the previous example, the program stack looks like this:
The two links correspond to the two effect handlers in the program. When the effect F is handled in baz, the program state looks as follows:
The delimited continuation k is an object on the heap that refers to the segment of the stack that corresponds to the suspended computation. Capturing a continuation does not involve copying stack frames. When the continuation is resumed, the stack is restored to the previous state by linking together the segment pointed to by k to the current stack. Since neither continuation capture nor resumption requires copying stack frames, suspending the execution using perform and resuming it using either continue or discontinue are fast.
Unlike languages such as Eff and Koka, effect handlers in OCaml do not provide effect safety; the compiler does not statically ensure that all the effects performed by the program are handled. If effects do not have a matching handler, then an Effect.Unhandled exception is raised at the point of the corresponding perform. For example, in the previous example, bar does not handle the effect F. Hence, we will get an Effect.Unhandled F exception when we run bar.
As discussed earlier (section 12.24.3), the delimited continuations in OCaml must be used linearly: every captured continuation must be resumed either with a continue or discontinue exactly once. Attempting to use a continuation more than once raises a Continuation_already_resumed exception.
The primary motivation for adding effect handlers to OCaml is to enable concurrent programming. One-shot continuations are sufficient for almost all concurrent programming needs. They are also much cheaper to implement compared to multi-shot continuations since they do not require stack frames to be copied. Moreover, OCaml programs may also manipulate linear resources such as sockets and file descriptors. The linearity discipline is easily broken if the continuations are allowed to resume more than once. It would be quite hard to debug such linearity violations on resources due to the lack of static checks for linearity and the non-local nature of control flow. Hence, OCaml does not support multi-shot continuations.
While the âat most once resumptionâ property of continuations is ensured with a dynamic check, there is no check to ensure that the continuations are resumed âat least onceâ. It is left to the user to ensure that the captured continuations are resumed at least once. Not resuming continuations will leak the memory allocated for the fibers as well as any resources that the suspended computation may hold.
One may install a finaliser on the captured continuation to ensure that the resources are freed:
In this case, if k becomes unreachable, then the finaliser ensures that the continuation stack is unwound by discontinuing with an Unwind exception, allowing the computation to free up resources. However, the runtime cost of finalisers is much more than the cost of capturing a continuation. Hence, it is recommended that the user take care of resuming the continuation exactly once rather than relying on the finaliser.
The examples that we have seen so far have used deep handlers. A deep handler handles all the effects performed (in sequence) by the computation. Whenever a continuation is captured in a deep handler, the captured continuation also includes the handler. This means that, when the continuation is resumed, the effect handler is automatically re-installed, and will handle the effect(s) that the computation may perform in the future.
OCaml also provides shallow handlers. Compared to deep handlers, a shallow handler handles only the first effect performed by the computation. The continuation captured in a shallow handler does not include the handler. This means that, when the continuation is resumed, the handler is no longer present. For this reason, when the continuation is resumed, the user is expected to provide a new effect handler (possibly a different one) to handle the next effect that the computation may perform.
Shallow handlers make it easier to express certain kinds of programs. Let us implement a shallow handler that enforces a particular sequence of effects (a protocol) on a computation. For this example, let us consider that the computation may perform the following effects:
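A plausible declaration for these two effects is sketched below (the payload types are an assumption chosen for illustration):

open Effect

type _ Effect.t += Send : int -> unit Effect.t
                 | Recv : int Effect.t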
Let us assume that we want to enforce a protocol that only permits an alternating sequence of Send and Recv effects that conform to the regular expression (Send;Recv)*;Send?. Hence, the sequence of effects [] (the empty sequence), [Send], [Send;Recv], [Send;Recv;Send], etc., are allowed, but not [Recv], [Send;Send], [Send;Recv;Recv], etc. The key observation here is that the set of effects handled evolves over time. We can enforce this protocol quite naturally using shallow handlers as shown below:
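The following code is a sketch of such a handler, assuming the Send and Recv declarations above; the value 0 used to answer a Recv and the failure messages are arbitrary choices:

open Effect.Shallow

let run (comp : unit -> unit) : unit =
  let rec loop_send : type a. (a, unit) continuation -> a -> unit =
    fun k v ->
      continue_with k v
        { retc = (fun () -> ());
          exnc = raise;
          effc = fun (type b) (eff : b Effect.t) ->
            match eff with
            | Send _ ->
                Some (fun (k : (b, _) continuation) -> loop_recv k ())
            | Recv -> failwith "protocol violation: expected Send"
            | _ -> None }
  and loop_recv : type a. (a, unit) continuation -> a -> unit =
    fun k v ->
      continue_with k v
        { retc = (fun () -> ());
          exnc = raise;
          effc = fun (type b) (eff : b Effect.t) ->
            match eff with
            | Recv ->
                Some (fun (k : (b, _) continuation) -> loop_send k 0)
            | Send _ -> failwith "protocol violation: expected Recv"
            | _ -> None }
  in
  loop_send (fiber comp) ()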
The run function executes the computation comp, ensuring that it can only perform an alternating sequence of Send and Recv effects. The shallow handler uses a different set of primitives compared to the deep handler. The primitive fiber (on the last line) takes an 'a -> 'b function and returns an ('a,'b) Effect.Shallow.continuation. The expression continue_with k v h resumes the continuation k with value v under the handler h.
The mutually recursive functions loop_send and loop_recv resume the given continuation k with value v under different handlers. The loop_send function handles the Send effect and tail calls the loop_recv function. If the computation performs the Recv effect, then loop_send aborts the computation by raising an exception. Similarly, the loop_recv function handles the Recv effect and tail calls the loop_send function. If the computation performs the Send effect, then loop_recv aborts the computation. Given that the continuation captured in the shallow handler does not include the handler, there is only ever one handler installed in the dynamic scope of the computation comp.
The computation is initially executed by the loop_send function (see last line in the code above) which ensures that the first effect that the computation is allowed to perform is the Send effect. Note that the computation is free to perform effects other than Send and Recv, which may be handled by an outer handler.
We can see that the run function will permit a computation that follows the protocol:
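For instance, with the sketch above, a conforming computation runs to completion:

# run (fun () -> perform (Send 1); ignore (perform Recv); perform (Send 2));;
- : unit = ()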
and aborts those that do not:
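Again with the same sketch, a computation that starts with Recv is aborted:

# run (fun () -> ignore (perform Recv); perform (Send 1));;
Exception: Failure "protocol violation: expected Send".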
We may implement the same example using deep handlers, either with reference cells (easy, but unsatisfying) or without them (harder). We leave this as an exercise to the reader.
Part III
This chapter describes the OCaml batch compiler ocamlc, which compiles OCaml source files to bytecode object files and links these object files to produce standalone bytecode executable files. These executable files are then run by the bytecode interpreter ocamlrun.
The ocamlc command has a command-line interface similar to the one of most C compilers. It accepts several types of arguments and processes them sequentially, after all options have been processed:
If the interface file x.mli exists, the implementation x.ml is checked against the corresponding compiled interface x.cmi, which is assumed to exist. If no interface x.mli is provided, the compilation of x.ml produces a compiled interface file x.cmi in addition to the compiled object code file x.cmo. The file x.cmi produced corresponds to an interface that exports everything that is defined in the implementation x.ml.
The output of the linking phase is a file containing compiled bytecode that can be executed by the OCaml bytecode interpreter: the command named ocamlrun. If a.out is the name of the file produced by the linking phase, the command
ocamlrun a.out arg1 arg2 ... argn
executes the compiled code contained in a.out, passing it as arguments the character strings arg1 to argn. (See chapter 15 for more details.)
On most systems, the file produced by the linking phase can be run directly, as in:
./a.out arg1 arg2 ... argn
The produced file has the executable bit set, and it manages to launch the bytecode interpreter by itself.
The compiler is able to emit some information on its internal stages. It can output .cmt files for the implementation of the compilation unit and .cmti files for signatures if the option -bin-annot is passed to it (see the description of -bin-annot below). Each such file contains a typed abstract syntax tree (AST) produced during the type-checking procedure. This tree contains all available information about the location and the specific type of each term in the source file. The AST is partial if type checking was unsuccessful.
These .cmt and .cmti files are typically useful for code inspection tools.
The following command-line options are recognized by ocamlc. The options -pack, -a, -c, -output-obj and -output-complete-obj are mutually exclusive.
If -custom, -cclib or -ccopt options are passed on the command line, these options are stored in the resulting .cma library. Then, linking with this library automatically adds back the -custom, -cclib and -ccopt options as if they had been provided on the command line, unless the -noautolink option is given.
The environment variable OCAML_COLOR is considered if -color is not provided. Its values are auto/always/never as above.
If -color is not provided, OCAML_COLOR is not set and the environment variable NO_COLOR is set, then color output is disabled. Otherwise, the default setting is "auto", and the current heuristic checks that the TERM environment variable exists and is not empty or "dumb", and that isatty(stderr) holds.
The environment variable OCAML_ERROR_STYLE is considered if -error-style is not provided. Its values are short/contextual as above.
Unix: Never use the strip command on executables produced by ocamlc -custom; this would remove the bytecode part of the executable.
Unix: Security warning: never set the "setuid" or "setgid" bits on executables produced by ocamlc -custom; this would make them vulnerable to attacks.
If the given directory starts with +, it is taken relative to the standard library directory. For instance, -I +unix adds the subdirectory unix of the standard library to the search path.
The -opaque option, available since 4.04, disables cross-module optimization information for the currently compiled unit. When compiling a .mli interface, using -opaque marks the compiled .cmi interface so that subsequent compilations of modules that depend on it will not rely on the corresponding .cmx file, nor warn if it is absent. When the native compiler compiles a .ml implementation, using -opaque generates a .cmx that does not contain any cross-module optimization information.
Using this option may degrade the quality of generated code, but it reduces compilation time, both on clean and incremental builds. Indeed, with the native compiler, when the implementation of a compilation unit changes, all the units that depend on it may need to be recompiled, because the cross-module information may have changed. If the compilation unit whose implementation changed was compiled with -opaque, no such recompilation needs to occur. This option can thus be used, for example, to get faster edit-compile-test feedback loops.
ocamlc -pack -o p.cmo a.cmo b.cmo c.cmo generates compiled files p.cmo and p.cmi describing a compilation unit having three sub-modules A, B and C, corresponding to the contents of the object files a.cmo, b.cmo and c.cmo. These contents can be referenced as P.A, P.B and P.C in the remainder of the program.
The warning-list argument is a sequence of warning specifiers, with no separators between them. A warning specifier is one of the following:
Alternatively, warning-list can specify a single warning using its mnemonic name (see below), as follows:
Warning numbers, letters and names which are not currently defined are ignored. The warnings are as follows (the name following each number specifies the mnemonic for that warning).
The default setting is -w +a-4-6-7-9-27-29-32..42-44-45-48-50-60. It is displayed by ocamlc -help. Note that warnings 5 and 10 are not always triggered, depending on the internals of the type checker.
Note: it is not recommended to use warning sets (i.e. letters) as arguments to -warn-error in production code, because this can break your build when future versions of OCaml add some new warnings.
The default setting is -warn-error -a+31 (only warning 31 is fatal).
Contextual control of command-line options
The compiler command line can be modified "from the outside" with the following mechanisms. These are experimental and subject to change. They should be used only for experimental and development work, not in released packages.
This short section is intended to clarify the relationship between the names of the modules corresponding to compilation units and the names of the files that contain their compiled interface and compiled implementation.
The compiler always derives the module name by taking the capitalized base name of the source file (.ml or .mli file). That is, it strips the leading directory name, if any, as well as the .ml or .mli suffix; then, it sets the first letter to uppercase, in order to comply with the requirement that module names must be capitalized. For instance, compiling the file mylib/misc.ml provides an implementation for the module named Misc. Other compilation units may refer to components defined in mylib/misc.ml under the names Misc.name; they can also do open Misc, then use the unqualified names name.
The .cmi and .cmo files produced by the compiler have the same base name as the source file. Hence, the compiled files always have their base name equal (modulo capitalization of the first letter) to the name of the module they describe (for .cmi files) or implement (for .cmo files).
When the compiler encounters a reference to a free module identifier Mod, it looks in the search path for a file named Mod.cmi or mod.cmi and loads the compiled interface contained in that file. As a consequence, renaming .cmi files is not advised: the name of a .cmi file must always correspond to the name of the compilation unit it implements. It is admissible to move them to another directory, if their base name is preserved, and the correct -I options are given to the compiler. The compiler will flag an error if it loads a .cmi file that has been renamed.
Compiled bytecode files (.cmo files), on the other hand, can be freely renamed once created. That's because the linker never attempts to find by itself the .cmo file that implements a module with a given name: it relies instead on the user providing the list of .cmo files by hand.
This section describes and explains the most frequently encountered error messages.
If filename has the format mod.cmo, this means you are trying to link a bytecode object file that does not exist yet. Fix: compile mod.ml first.
If your program spans several directories, this error can also appear because you haven't specified the directories to look into. Fix: add the correct -I options to the command line.
In some cases, it is hard to understand why the two types t1 and t2 are incompatible. For instance, the compiler can report that "expression of type foo cannot be used with type foo", and it really seems that the two types foo are compatible. This is not always true. Two type constructors can have the same name, but actually represent different types. This can happen if a type constructor is redefined. Example:
type foo = A | B
let f = function A -> 0 | B -> 1
type foo = C | D
f C
This results in the error message "expression C of type foo cannot be used with type foo".
Non-generalized type variables in a type cause no difficulties inside a given structure or compilation unit (the contents of a .ml file, or an interactive session), but they cannot be allowed inside signatures nor in compiled interfaces (.cmi file), because they could be used inconsistently later. Therefore, the compiler flags an error when a structure or compilation unit defines a value name whose type contains non-generalized type variables. There are two ways to fix this error:
let sort_int_list = List.sort Stdlib.compare
(* inferred type 'a list -> 'a list, with 'a not generalized *)
write
let sort_int_list = (List.sort Stdlib.compare : int list -> int list);;
let map_length = List.map Array.length
(* inferred type 'a array list -> int list, with 'a not generalized *)
write
let map_length lv = List.map Array.length lv
Of course, you will always encounter this error if you have mutually recursive functions across modules. That is, function Mod1.f calls function Mod2.g, and function Mod2.g calls function Mod1.f. In this case, no matter what permutations you perform on the command line, the program will be rejected at link-time. Fixes:
mod1.ml: let f x = ... Mod2.g ...
mod2.ml: let g y = ... Mod1.f ...
define
mod1.ml: let f g x = ... g ...
mod2.ml: let rec g y = ... Mod1.f g ...
and link mod1.cmo before mod2.cmo.
mod1.ml: let forward_g = ref ((fun x -> failwith "forward_g") : <type>)
         let f x = ... !forward_g ...
mod2.ml: let g y = ... Mod1.f ...
         let _ = Mod1.forward_g := g
This section describes and explains in detail some warnings:
OCaml supports labels-omitted full applications: if the function has a known arity, all the arguments are unlabeled, and their number matches the number of non-optional parameters, then labels are ignored and non-optional parameters are matched in their definition order. Optional arguments are defaulted.
let f ~x ~y = x + y
let test = f 2 3

> let test = f 2 3
>            ^
> Warning 6 [labels-omitted]: labels x, y were omitted in the application of this function.
This support for labels-omitted application was introduced when labels were added to OCaml, to ease the progressive introduction of labels in a codebase. However, it has the downside of weakening the labeling discipline: if you use labels to prevent callers from mistakenly reordering two parameters of the same type, labels-omitted applications make this mistake possible again.
Warning 6 warns when labels-omitted applications are used, to discourage their use. When labels were introduced, this warning was not enabled by default, so users would use labels-omitted applications, often without noticing.
Over time, it has become idiomatic to enable this warning to avoid argument-order mistakes. The warning is now on by default, since OCaml 4.13. Labels-omitted applications are not recommended anymore, but users wishing to preserve this transitory style can disable the warning explicitly.
When pattern matching on records, it can be useful to match only a few fields of a record. Eliding fields can be done either implicitly, or explicitly by ending the record pattern with ; _. However, implicit field elision is at odds with pattern matching exhaustiveness checks. Enabling warning 9 prioritizes exhaustiveness checks over the convenience of implicit field elision and will warn on implicit field elision in record patterns. In particular, this warning can help to spot exhaustive record patterns that may need to be updated after the addition of new fields to a record type.
type 'a point = {x : 'a; y : 'a}
let dx { x } = x     (* implicit field elision: triggers warning 9 *)
let dy { y; _ } = y  (* explicit field elision: does not trigger warning 9 *)
Some constructors, such as the exception constructors Failure and Invalid_argument, take as parameter a string value holding a text message intended for the user.
These text messages are usually not stable over time: call sites building these constructors may refine the message in a future version to make it more explicit, etc. Therefore, it is dangerous to match over the precise value of the message. For example, until OCaml 4.02, Array.iter2 would raise the exception
Invalid_argument "arrays must have the same length"
Since 4.03 it raises the more helpful message
Invalid_argument "Array.iter2: arrays must have the same length"
but this means that any code of the form
try ... with Invalid_argument "arrays must have the same length" -> ...
is now broken and may suffer from uncaught exceptions.
Warning 52 is there to prevent users from writing such fragile code in the first place. It does not occur for every match on a literal string, but only when library authors have expressed their intent to possibly change the constructor parameter value in the future, by using the attribute ocaml.warn_on_literal_pattern (see the manual section on builtin attributes in 12.12.1):
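For instance, a library author might mark a constructor as follows (the type and constructor names here are illustrative):

type t =
  | Foo of string [@ocaml.warn_on_literal_pattern]
  | Bar of int

With such a declaration, matching Foo "some message" against a literal string triggers warning 52, while a pattern such as Foo _ does not.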
In particular, all built-in exceptions with a string argument have this attribute set: Invalid_argument, Failure, Sys_error will all raise this warning if you match for a specific string argument.
Additionally, built-in exceptions with a structured argument that includes a string also have the attribute set: Assert_failure and Match_failure will raise the warning for a pattern that uses a literal string to match the first element of their tuple argument.
If your code raises this warning, you should not simply change the way you test for the specific string in order to silence the warning (for example, by using a string equality in the right-hand side instead of a literal pattern), as your code would remain fragile. You should instead enlarge the scope of the pattern by matching on all possible values.
let warning = function
| Foo _ -> 0
| _ -> 1
This may require some care: if the scrutinee may return several different cases of the same pattern, or raise distinct instances of the same exception, you may need to modify your code to separate those several cases.
For example,
try (int_of_string count_str, bool_of_string choice_str)
with
| Failure "int_of_string" -> (0, true)
| Failure "bool_of_string" -> (-1, false)
should be rewritten into more atomic tests. For example, using the exception patterns documented in section 11.6, one can write:
match int_of_string count_str with
| exception (Failure _) -> (0, true)
| count ->
    begin match bool_of_string choice_str with
    | exception (Failure _) -> (-1, false)
    | choice -> (count, choice)
    end
The only case where that transformation is not possible is if a given function call may raise distinct exceptions with the same constructor but different string values. In this case, you will have to check for specific string values. This is dangerous API design and it should be discouraged: it is better to define more precise exception constructors than to store useful information in strings.
The semantics of or-patterns in OCaml is specified with a left-to-right bias: a value v matches the pattern p | q if it matches p or q, but if it matches both, the environment captured by the match is the environment captured by p, never the one captured by q.
While this property is generally intuitive, there is at least one specific case where a different semantics might be expected. Consider a pattern followed by a when-guard: | p when g -> e, for example:
| ((Const x, _) | (_, Const x)) when is_neutral x -> branch
The semantics is clear: match the scrutinee against the pattern; if it matches, test the guard, and if the guard passes, take the branch. In particular, consider the input (Const a, Const b), where a fails the test is_neutral a, while b passes the test is_neutral b. With the left-to-right semantics, the clause above is not taken on this input: matching (Const a, Const b) against the or-pattern succeeds in the left branch, it returns the environment x -> a, and then the guard is_neutral a is tested and fails, so the branch is not taken.
However, another semantics may be considered more natural here: any pair that has one side passing the test will take the branch. With this semantics the previous code fragment would be equivalent to
| (Const x, _) when is_neutral x -> branch | (_, Const x) when is_neutral x -> branch
This is not the semantics adopted by OCaml.
Warning 57 is dedicated to these confusing cases where the specified left-to-right semantics is not equivalent to a non-deterministic semantics (any branch can be taken) relative to a specific guard. More precisely, it warns when the guard uses "ambiguous" variables, that is, variables that are bound to different parts of the scrutinee by the different sides of an or-pattern.
This chapter describes the toplevel system for OCaml, which permits interactive use of the OCaml system through a read-eval-print loop (REPL). In this mode, the system repeatedly reads OCaml phrases from the input, then typechecks, compiles and evaluates them, then prints the inferred type and result value, if any. The system prints a # (sharp) prompt before reading each phrase.
Input to the toplevel can span several lines. It is terminated by ;; (a double-semicolon). The toplevel input consists of one or several toplevel phrases, with the following syntax:
A phrase can consist of a definition, like those found in implementations of compilation units or in struct ⊠end module expressions. The definition can bind value names, type names, an exception, a module name, or a module type name. The toplevel system performs the bindings, then prints the types and values (if any) for the names thus defined.
A phrase may also consist of a value expression (section 11.7). It is simply evaluated without performing any bindings, and its value is printed.
Finally, a phrase can also consist of a toplevel directive, starting with # (the sharp sign). These directives control the behavior of the toplevel; they are listed below in section 14.2.
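For instance, here is a short interactive session sketch showing a definition phrase followed by an expression phrase (the name square is chosen only for illustration):

# let square x = x * x;;
val square : int -> int = <fun>
# square 3;;
- : int = 9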
Unix: The toplevel system is started by the command ocaml, as follows:

ocaml options objects             # interactive mode
ocaml options objects scriptfile  # script mode

options are described below. objects are filenames ending in .cmo or .cma; they are loaded into the interpreter immediately after options are set. scriptfile is any file name not ending in .cmo or .cma.

If no scriptfile is given on the command line, the toplevel system enters interactive mode: phrases are read on standard input, results are printed on standard output, errors on standard error. End-of-file on standard input terminates ocaml (see also the #quit directive in section 14.2).
On start-up (before the first phrase is read), if the file .ocamlinit exists in the current directory, its contents are read as a sequence of OCaml phrases and executed as per the #use directive described in section 14.2. The evaluation outcome for each phrase is not displayed. If the current directory does not contain an .ocamlinit file, the file XDG_CONFIG_HOME/ocaml/init.ml is looked up according to the XDG base directory specification and used instead (on Windows this is skipped). If that file does not exist, then the file .ocamlinit in the user's home directory (determined via the environment variable HOME) is used, if it exists.
The toplevel system does not perform line editing, but it can easily be used in conjunction with an external line editor such as ledit or rlwrap. An improved toplevel, utop, is also available. Another option is to use ocaml under GNU Emacs, which gives the full editing power of Emacs (command run-caml from library inf-caml).
At any point, the parsing, compilation or evaluation of the current phrase can be interrupted by pressing ctrl-C (or, more precisely, by sending the INTR signal to the ocaml process). The toplevel then immediately returns to the # prompt.
If scriptfile is given on the command-line to ocaml, the toplevel system enters script mode: the contents of the file are read as a sequence of OCaml phrases and executed, as per the #use directive (section 14.2). The outcome of the evaluation is not printed. On reaching the end of file, the ocaml command exits immediately. No commands are read from standard input. Sys.argv is transformed, ignoring all OCaml parameters, and starting with the script file name in Sys.argv.(0).
In script mode, the first line of the script is ignored if it starts with #!. Thus, it should be possible to make the script itself executable and put as first line #!/usr/local/bin/ocaml, thus calling the toplevel system automatically when the script is run. However, ocaml itself is a #! script on most installations of OCaml, and Unix kernels usually do not handle nested #! scripts. A better solution is to put the following as the first line of the script:
#!/usr/local/bin/ocamlrun /usr/local/bin/ocaml
The following command-line options are recognized by the ocaml command.
If the given directory starts with +, it is taken relative to the standard library directory. For instance, -I +unix adds the subdirectory unix of the standard library to the search path.
Directories can also be added to the list once the toplevel is running with the #directory directive (section 14.2).
The warning-list argument is a sequence of warning specifiers, with no separators between them. A warning specifier is one of the following:
Alternatively, warning-list can specify a single warning using its mnemonic name (see below), as follows:
Warning numbers, letters and names which are not currently defined are ignored. The warnings are as follows (the name following each number specifies the mnemonic for that warning).
The default setting is -w +a-4-6-7-9-27-29-32..42-44-45-48-50-60. It is displayed by -help. Note that warnings 5 and 10 are not always triggered, depending on the internals of the type checker.
Note: it is not recommended to use warning sets (i.e. letters) as arguments to -warn-error in production code, because this can break your build when future versions of OCaml add some new warnings.
The default setting is -warn-error -a+31 (only warning 31 is fatal).
Unix: The following environment variables are also consulted:
- OCAMLTOP_INCLUDE_PATH
- Additional directories to search for compiled object code files (.cmi, .cmo and .cma). The specified directories are considered from left to right, after the include directories specified on the command line via -I have been searched. Available since OCaml 4.08.
- OCAMLTOP_UTF_8
- When printing string values, non-ASCII bytes (greater than 0x7E) are printed as decimal escape sequences if OCAMLTOP_UTF_8 is set to false. Otherwise, they are printed unescaped.
- TERM
- When printing error messages, the toplevel system attempts to underline visually the location of the error. It consults the TERM variable to determine the type of the output terminal and looks up its capabilities in the terminal database.
- XDG_CONFIG_HOME, HOME
- .ocamlinit lookup procedure (see above).
The following directives control the toplevel behavior, load files in memory, and trace program execution.
Note: all directives start with a # (sharp) symbol. This # must be typed before the directive, and must not be confused with the # prompt displayed by the interactive loop. For instance, typing #quit;; will exit the toplevel loop, but typing quit;; will result in an "unbound value quit" error.
For directives that take file names as arguments, if the given file name specifies no directory, the file is searched in the following directories:
The printing function printer-name should have type Format.formatter -> t -> unit, where t is the type for the values to be printed, and should output its textual representation for the value of type t on the given formatter, using the functions provided by the Format library. For backward compatibility, printer-name can also have type t -> unit and should then output on the standard formatter, but this usage is deprecated.
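As a sketch, a printer for a hypothetical record type point could be written as follows and then registered with #install_printer print_point:

type point = { x : float; y : float }

let print_point (ppf : Format.formatter) (p : point) =
  Format.fprintf ppf "(%g, %g)" p.x p.y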
Toplevel phrases can refer to identifiers defined in compilation units with the same mechanisms as for separately compiled units: either by using qualified names (Modulename.localname), or by using the open construct and unqualified names (see section 11.3).
However, before referencing another compilation unit, an implementation of that unit must be present in memory. At start-up, the toplevel system contains implementations for all the modules in the standard library. Implementations for user modules can be entered with the #load directive described above. Referencing a unit for which no implementation has been provided results in the error Reference to undefined global `...'.
Note that entering open Mod merely accesses the compiled interface (.cmi file) for Mod, but does not load the implementation of Mod, and does not cause any error if no implementation of Mod has been loaded. The error "reference to undefined global Mod" will occur only when executing a value or module definition that refers to Mod.
This section describes and explains the most frequently encountered error messages.
If filename has the format mod.cmi, this means you have referenced the compilation unit mod, but its compiled interface could not be found. Fix: compile mod.mli or mod.ml first, to create the compiled interface mod.cmi.
If filename has the format mod.cmo, this means you are trying to load with #load a bytecode object file that does not exist yet. Fix: compile mod.ml first.
If your program spans several directories, this error can also appear because you haven't specified the directories to look into. Fix: use the #directory directive to add the correct directories to the search path.
The ocamlmktop command builds OCaml toplevels that contain user code preloaded at start-up.
The ocamlmktop command takes as argument a set of .cmo and .cma files, and links them with the object files that implement the OCaml toplevel. The typical use is:
ocamlmktop -o mytoplevel foo.cmo bar.cmo gee.cmo
This creates the bytecode file mytoplevel, containing the OCaml toplevel system, plus the code from the three .cmo files. This toplevel is directly executable and is started by:
./mytoplevel
This enters a regular toplevel loop, except that the code from foo.cmo, bar.cmo and gee.cmo is already loaded in memory, just as if you had typed:
#load "foo.cmo";; #load "bar.cmo";; #load "gee.cmo";;
on entrance to the toplevel. The modules Foo, Bar and Gee are not opened, though; you still have to do
open Foo;;
yourself, if this is what you wish.
The following command-line options are recognized by ocamlmktop.
This section describes a tool that is not yet officially supported but may be found useful.
OCaml code executing in the traditional toplevel system uses the bytecode interpreter. When increased performance is required, or for testing programs that will only execute correctly when compiled to native code, the native toplevel may be used instead.
For the majority of installations the native toplevel will not have been installed along with the rest of the OCaml toolchain. In such circumstances it will be necessary to build the OCaml distribution from source. From the built source tree of the distribution you may use make natruntop to build and execute a native toplevel. (Alternatively make ocamlnat can be used, which just performs the build step.)
If the make install command is run after having built the native toplevel then the ocamlnat program (either from the source or the installation directory) may be invoked directly rather than using make natruntop.
The ocamlrun command executes bytecode files produced by the linking phase of the ocamlc command.
The ocamlrun command comprises three main parts: the bytecode interpreter, that actually executes bytecode files; the memory allocator and garbage collector; and a set of C functions that implement primitive operations such as input/output.
The usage for ocamlrun is:
ocamlrun options bytecode-executable arg1 ... argn
The first non-option argument is taken to be the name of the file containing the executable bytecode. (That file is searched in the executable path as well as in the current directory.) The remaining arguments are passed to the OCaml program, in the string array Sys.argv. Element 0 of this array is the name of the bytecode executable file; elements 1 to n are the remaining arguments arg1 to argn.
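As an illustration (this program is not part of the manual's examples), a bytecode program can inspect those arguments through Sys.argv:

(* print_args.ml: echo the arguments passed by ocamlrun *)
let () =
  Array.iteri
    (fun i arg -> Printf.printf "Sys.argv.(%d) = %s\n" i arg)
    Sys.argv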
As mentioned in chapter 13, the bytecode executable files produced by the ocamlc command are self-executable, and manage to launch the ocamlrun command on themselves automatically. That is, assuming a.out is a bytecode executable file,
a.out arg1 ... argn
works exactly as
ocamlrun a.out arg1 ... argn
Notice that it is not possible to pass options to ocamlrun when invoking a.out directly.
Windows: Under several versions of Windows, bytecode executable files are self-executable only if their name ends in .exe. It is recommended to always give .exe names to bytecode executables, e.g. compile with ocamlc -o myprog.exe ... rather than ocamlc -o myprog ....
The following command-line options are recognized by ocamlrun.
The following environment variables are also consulted:
If the option letter is not recognized, the whole parameter is ignored; if the equal sign or the number is missing, the value is taken as 1; if the multiplier is not recognized, it is ignored.
For example, on a 32-bit machine, under bash the command
export OCAMLRUNPARAM='b,s=256k,v=0x015'
tells a subsequent ocamlrun to print backtraces for uncaught exceptions, set its initial minor heap size to 1 megabyte and print a message at the start of each major GC cycle, when the heap size changes, and when compaction is triggered.
On platforms that support dynamic loading, ocamlrun can link dynamically with C shared libraries (DLLs) providing additional C primitives beyond those provided by the standard runtime system. The names for these libraries are provided at link time as described in section 22.1.4, and recorded in the bytecode executable file; ocamlrun then locates these libraries and resolves references to their primitives when the bytecode executable program starts.
The ocamlrun command searches shared libraries in the following directories, in the order indicated:
This section describes and explains the most frequently encountered error messages.
To help you diagnose this error, run your program with the -v option to ocamlrun, or with the OCAMLRUNPARAM environment variable set to v=63. If it displays lots of "Growing stack..." messages, this is probably a looping recursive function. If it displays lots of "Growing heap..." messages, with the heap size growing slowly, this is probably an attempt to construct a data structure with too many (infinitely many?) cells. If it displays few "Growing heap..." messages, but with a huge increment in the heap size, this is probably an attempt to build an excessively large array, string or byte sequence.
This chapter describes the OCaml high-performance native-code compiler ocamlopt, which compiles OCaml source files to native code object files and links these object files to produce standalone executables.
The native-code compiler is only available on certain platforms. It produces code that runs faster than the bytecode produced by ocamlc, at the cost of increased compilation time and executable code size. Compatibility with the bytecode compiler is extremely high: the same source code should run identically when compiled with ocamlc and ocamlopt.
It is not possible to mix native-code object files produced by ocamlopt with bytecode object files produced by ocamlc: a program must be compiled entirely with ocamlopt or entirely with ocamlc. Native-code object files produced by ocamlopt cannot be loaded in the toplevel system ocaml.
The ocamlopt command has a command-line interface very close to that of ocamlc. It accepts the same types of arguments, and processes them sequentially, after all options have been processed:
The implementation is checked against the interface file x.mli (if it exists) as described in the manual for ocamlc (chapter 13).
The output of the linking phase is a regular Unix or Windows executable file. It does not need ocamlrun to run.
The compiler is able to emit some information on its internal stages:
These .cmt and .cmti files are typically useful for code inspection tools.
An external tool can perform low-level optimisations, such as code layout, by transforming a .cmir-linear file. To continue compilation, the compiler can be invoked with (a possibly modified) .cmir-linear file as an argument, instead of the corresponding source file.
The following command-line options are recognized by ocamlopt. The options -pack, -a, -shared, -c, -output-obj and -output-complete-obj are mutually exclusive.
If -cclib or -ccopt options are passed on the command line, these options are stored in the resulting .cmxa library. Then, linking with this library automatically adds back the -cclib and -ccopt options as if they had been provided on the command line, unless the -noautolink option is given.
The environment variable OCAML_COLOR is considered if -color is not provided. Its values are auto/always/never as above.
If -color is not provided, OCAML_COLOR is not set and the environment variable NO_COLOR is set, then color output is disabled. Otherwise, the default setting is "auto", and the current heuristic checks that the TERM environment variable exists and is not empty or "dumb", and that isatty(stderr) holds.
The environment variable OCAML_ERROR_STYLE is considered if -error-style is not provided. Its values are short/contextual as above.
If the given directory starts with +, it is taken relative to the standard library directory. For instance, -I +unix adds the subdirectory unix of the standard library to the search path.
The -opaque option, available since 4.04, disables cross-module optimization information for the currently compiled unit. When compiling a .mli interface, using -opaque marks the compiled .cmi interface so that subsequent compilations of modules that depend on it will not rely on the corresponding .cmx file, nor warn if it is absent. When the native compiler compiles a .ml implementation, using -opaque generates a .cmx that does not contain any cross-module optimization information.
Using this option may degrade the quality of generated code, but it reduces compilation time, both on clean and incremental builds. Indeed, with the native compiler, when the implementation of a compilation unit changes, all the units that depend on it may need to be recompiled, because the cross-module information may have changed. If the compilation unit whose implementation changed was compiled with -opaque, no such recompilation needs to occur. This option can thus be used, for example, to get faster edit-compile-test feedback loops.
ocamlopt -pack -o P.cmx A.cmx B.cmx C.cmx generates compiled files P.cmx, P.o and P.cmi describing a compilation unit having three sub-modules A, B and C, corresponding to the contents of the object files A.cmx, B.cmx and C.cmx. These contents can be referenced as P.A, P.B and P.C in the remainder of the program.
The .cmx object files being combined must have been compiled with the appropriate -for-pack option. In the example above, A.cmx, B.cmx and C.cmx must have been compiled with ocamlopt -for-pack P.
Multiple levels of packing can be achieved by combining -pack with -for-pack. Consider the following example:
ocamlopt -for-pack P.Q -c A.ml ocamlopt -pack -o Q.cmx -for-pack P A.cmx ocamlopt -for-pack P -c B.ml ocamlopt -pack -o P.cmx Q.cmx B.cmx
The resulting P.cmx object file has sub-modules P.Q, P.Q.A and P.B.
This experimental feature enables external tools to inspect and manipulate the compiler's intermediate representation of the program using the compiler-libs library (see chapter 29 and Compiler_libs).
The warning-list argument is a sequence of warning specifiers, with no separators between them. A warning specifier is one of the following:
Alternatively, warning-list can specify a single warning using its mnemonic name (see below), as follows:
Warning numbers, letters and names which are not currently defined are ignored. The warnings are as follows (the name following each number specifies the mnemonic for that warning).
The default setting is -w +a-4-6-7-9-27-29-32..42-44-45-48-50-60. It is displayed by ocamlopt -help. Note that warnings 5 and 10 are not always triggered, depending on the internals of the type checker.
Note: it is not recommended to use warning sets (i.e. letters) as arguments to -warn-error in production code, because this can break your build when future versions of OCaml add some new warnings.
The default setting is -warn-error -a+31 (only warning 31 is fatal).
The 32-bit code generator for Intel/AMD x86 processors (i386 architecture) supports the following additional option:
The 64-bit code generator for Intel/AMD x86 processors (amd64 architecture) supports the following additional options:
The PowerPC code generator supports the following additional options:
The compiler command line can be modified "from the outside" with the following mechanisms. These are experimental and subject to change. They should be used only for experimental and development work, not in released packages.
The error messages are almost identical to those of ocamlc. See section 13.4.
Executables generated by ocamlopt are native, stand-alone executable files that can be invoked directly. They do not depend on the ocamlrun bytecode runtime system nor on dynamically-loaded C/OCaml stub libraries.
During execution of an ocamlopt-generated executable, the following environment variables are also consulted:
This section lists the known incompatibilities between the bytecode compiler and the native-code compiler. Except on those points, the two compilers should generate code that behaves identically.
The phrase let _ = ignore M.f contains a reference to compilation unit M when compiled to bytecode. This reference forces M to be linked and its initialization code to be executed. The native-code compiler eliminates the reference to M, hence the compilation unit M may not be linked and executed. A workaround is to compile M with the -linkall flag so that it will always be linked and executed, even if not referenced. See also the Sys.opaque_identity function from the Sys standard library module.
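For instance, a sketch of the latter workaround, intended to keep the reference to M alive under native compilation (assuming the same module M as above), might look like:

let _ = ignore (Sys.opaque_identity M.f)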
This chapter describes two program generators: ocamllex, that produces a lexical analyzer from a set of regular expressions with associated semantic actions, and ocamlyacc, that produces a parser from a grammar with associated semantic actions.
These program generators are very close to the well-known lex and yacc commands that can be found in most C programming environments. This chapter assumes a working knowledge of lex and yacc: while it describes the input syntax for ocamllex and ocamlyacc and the main differences with lex and yacc, it does not explain the basics of writing a lexer or parser description in lex and yacc. Readers unfamiliar with lex and yacc are referred to "Compilers: principles, techniques, and tools" by Aho, Lam, Sethi and Ullman (Pearson, 2006), or "Lex & Yacc" by Levine, Mason and Brown (O'Reilly, 1992).
The ocamllex command produces a lexical analyzer from a set of regular expressions with attached semantic actions, in the style of lex. Assuming the input file is lexer.mll, executing
ocamllex lexer.mll
produces OCaml code for a lexical analyzer in file lexer.ml. This file defines one lexing function per entry point in the lexer definition. These functions have the same names as the entry points. Lexing functions take as argument a lexer buffer, and return the semantic attribute of the corresponding entry point.
Lexer buffers are an abstract data type implemented in the standard library module Lexing. The functions Lexing.from_channel, Lexing.from_string and Lexing.from_function create lexer buffers that read from an input channel, a character string, or any reading function, respectively. (See the description of module Lexing in chapterââ28.)
When used in conjunction with a parser generated by ocamlyacc, the semantic actions compute a value belonging to the type token defined by the generated parsing module. (See the description of ocamlyacc below.)
The following command-line options are recognized by ocamllex.
The format of lexer definitions is as follows:
{ header }
let ident = regexp ...
[refill { refill-handler }]
rule entrypoint [arg1... argn] =
  parse regexp { action }
      | ...
      | regexp { action }
and entrypoint [arg1... argn] =
  parse ...
and ...
{ trailer }
Comments are delimited by (* and *), as in OCaml. The parse keyword can be replaced by the shortest keyword, with the semantic consequences explained below.
Refill handlers are a recent (optional) feature introduced in 4.02, documented below in subsection 17.2.7.
The header and trailer sections are arbitrary OCaml text enclosed in curly braces. Either or both can be omitted. If present, the header text is copied as is at the beginning of the output file and the trailer text at the end. Typically, the header section contains the open directives required by the actions, and possibly some auxiliary functions used in the actions.
Between the header and the entry points, one can give names to frequently-occurring regular expressions. This is written let ident = regexp. In regular expressions that follow this declaration, the identifier ident can be used as shorthand for regexp.
The names of the entry points must be valid identifiers for OCaml values (starting with a lowercase letter). Similarly, the arguments arg1... argn must be valid identifiers for OCaml. Each entry point becomes an OCaml function that takes n+1 arguments, the extra implicit last argument being of type Lexing.lexbuf. Characters are read from the Lexing.lexbuf argument and matched against the regular expressions provided in the rule, until a prefix of the input matches one of the rules. The corresponding action is then evaluated and returned as the result of the function.
If several regular expressions match a prefix of the input, the "longest match" rule applies: the regular expression that matches the longest prefix of the input is selected. In case of tie, the regular expression that occurs earlier in the rule is selected.
However, if lexer rules are introduced with the shortest keyword in place of the parse keyword, then the "shortest match" rule applies: the shortest prefix of the input is selected. In case of tie, the regular expression that occurs earlier in the rule is still selected. This feature is not intended for use in ordinary lexical analyzers; rather, it may facilitate the use of ocamllex as a simple text processing tool.
The regular expressions are in the style of lex, with a more OCaml-like syntax.
Concerning the precedences of operators, # has the highest precedence, followed by *, + and ?, then concatenation, then | (alternation), then as.
The actions are arbitrary OCaml expressions. They are evaluated in a context where the identifiers defined by using the as construct are bound to subparts of the matched string. Additionally, lexbuf is bound to the current lexer buffer. Some typical uses for lexbuf, in conjunction with the operations on lexer buffers provided by the Lexing standard library module, are listed below.
The as construct is similar to "groups" as provided by numerous regular expression packages. The type of these variables can be string, char, string option or char option.
We first consider the case of linear patterns, that is the case when all as bound variables are distinct. In regexp as ident, the type of ident normally is string (or string option) except when regexp is a character constant, an underscore, a string constant of length one, a character set specification, or an alternation of those. Then, the type of ident is char (or char option). Option types are introduced when overall rule matching does not imply matching of the bound sub-pattern. This is in particular the case of ( regexp as ident ) ? and of regexp1 | ( regexp2 as ident ).
There is no linearity restriction over as bound variables. When a variable is bound more than once, the previous rules are to be extended as follows:
For instance, in ('a' as x) | ( 'a' (_ as x) ) the variable x is of type char, whereas in ("ab" as x) | ( 'a' (_ as x) ? ) the variable x is of type string option.
In some cases, a successful match may not yield a unique set of bindings.
For instance, the matching of aba by the regular expression (('a'|"ab") as x) (("ba"|'a') as y) may result in binding either x to "ab" and y to "a", or x to "a" and y to "ba". The automata produced by ocamllex on such ambiguous regular expressions will select one of the possible resulting sets of bindings. The selected set of bindings is purposely left unspecified.
By default, when ocamllex reaches the end of its lexing buffer, it will silently call the refill_buff function of the lexbuf structure and continue lexing. It is sometimes useful to be able to take control of the refilling action; typically, if you use a library for asynchronous computation, you may want to wrap the refilling action in a delaying function to avoid blocking synchronous operations.
Since OCaml 4.02, it is possible to specify a refill-handler, a function that will be called when refill happens. It is passed the continuation of the lexing, on which it has total control. The OCaml expression used as refill action should have a type that is an instance of
(Lexing.lexbuf -> 'a) -> Lexing.lexbuf -> 'a
where the first argument is the continuation which captures the processing ocamllex would usually perform (refilling the buffer, then calling the lexing function again), and the result type that instantiates 'a should unify with the result type of all lexing rules.
As an example, consider the following lexer that is parametrized over an arbitrary monad:
{
type token = EOL | INT of int | PLUS

module Make (M : sig
               type 'a t
               val return: 'a -> 'a t
               val bind: 'a t -> ('a -> 'b t) -> 'b t
               val fail : string -> 'a t

               (* Set up lexbuf *)
               val on_refill : Lexing.lexbuf -> unit t
             end)
= struct

let refill_handler k lexbuf =
  M.bind (M.on_refill lexbuf) (fun () -> k lexbuf)

}

refill {refill_handler}

rule token = parse
| [' ' '\t'] { token lexbuf }
| '\n' { M.return EOL }
| ['0'-'9']+ as i { M.return (INT (int_of_string i)) }
| '+' { M.return PLUS }
| _ { M.fail "unexpected character" }

{
end
}
All identifiers starting with __ocaml_lex are reserved for use by ocamllex; do not use any such identifier in your programs.
The ocamlyacc command produces a parser from a context-free grammar specification with attached semantic actions, in the style of yacc. Assuming the input file is grammar.mly, executing
ocamlyacc options grammar.mly
produces OCaml code for a parser in the file grammar.ml, and its interface in file grammar.mli.
The generated module defines one parsing function per entry point in the grammar. These functions have the same names as the entry points. Parsing functions take as arguments a lexical analyzer (a function from lexer buffers to tokens) and a lexer buffer, and return the semantic attribute of the corresponding entry point. Lexical analyzer functions are usually generated from a lexer specification by the ocamllex program. Lexer buffers are an abstract data type implemented in the standard library module Lexing. Tokens are values from the concrete type token, defined in the interface file grammar.mli produced by ocamlyacc.
Grammar definitions have the following format:
%{ header %}
  declarations
%%
  rules
%%
  trailer
Comments are enclosed between /* and */ (as in C) in the "declarations" and "rules" sections, and between (* and *) (as in OCaml) in the "header" and "trailer" sections.
The header and the trailer sections are OCaml code that is copied as is into file grammar.ml. Both sections are optional. The header goes at the beginning of the output file; it usually contains open directives and auxiliary functions required by the semantic actions of the rules. The trailer goes at the end of the output file.
Declarations are given one per line. They all start with a % sign.

In the typexpr part of a %token declaration, all type constructor names must be fully qualified (e.g. Modname.typename), even if the proper open directives (e.g. open Modname) were given in the header section. That is because the header is copied only to the .ml output file, but not to the .mli output file, while the typexpr part of a %token declaration is copied to both. Symbols declared as entry points with %start must be given a type with the %type directive below.

The %type directive specifies the type of the semantic attribute for the given symbols; types for other nonterminal symbols are inferred (unless the -s option is in effect). The typexpr part is an arbitrary OCaml type expression, except that all type constructor names must be fully qualified, as explained above for %token.

The %left, %right and %nonassoc declarations associate precedences and associativities to the given symbols. All symbols on the same line are given the same precedence. They have higher precedence than symbols declared before in a %left, %right or %nonassoc line. They have lower precedence than symbols declared after in a %left, %right or %nonassoc line. The symbols are declared to associate to the left (%left), to the right (%right), or to be non-associative (%nonassoc). The symbols are usually tokens. They can also be dummy nonterminals, for use with the %prec directive inside the rules.
The precedence declarations are used in the following way to resolve reduce/reduce and shift/reduce conflicts:
The syntax for rules is as usual:
nonterminal :
    symbol ... symbol { semantic-action }
  | ...
  | symbol ... symbol { semantic-action }
;
Rules can also contain the %prec symbol directive in the right-hand side part, to override the default precedence and associativity of the rule with the precedence and associativity of the given symbol.
Semantic actions are arbitrary OCaml expressions that are evaluated to produce the semantic attribute attached to the defined nonterminal. The semantic actions can access the semantic attributes of the symbols in the right-hand side of the rule with the $ notation: $1 is the attribute for the first (leftmost) symbol, $2 is the attribute for the second symbol, etc.
The rules may contain the special symbol error to indicate resynchronization points, as in yacc.
Actions occurring in the middle of rules are not supported.
Nonterminal symbols are like regular OCaml symbols, except that they cannot end with ' (single quote).
Error recovery is supported as follows: when the parser reaches an error state (no grammar rules can apply), it calls a function named parse_error with the string "syntax error" as argument. The default parse_error function does nothing and returns, thus initiating error recovery (see below). The user can define a customized parse_error function in the header section of the grammar file.
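For instance, a minimal custom parse_error (a sketch; it only changes how the message is reported before error recovery proceeds) could be placed in the header section of the grammar file:

let parse_error msg =
  prerr_endline ("parser: " ^ msg)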
The parser also enters error recovery mode if one of the grammar actions raises the Parsing.Parse_error exception.
In error recovery mode, the parser discards states from the stack until it reaches a place where the error token can be shifted. It then discards tokens from the input until it finds three successive tokens that can be accepted, and starts processing with the first of these. If no state can be uncovered where the error token can be shifted, then the parser aborts by raising the Parsing.Parse_error exception.
Refer to the documentation on yacc for more details and guidance on how to use error recovery.
The ocamlyacc command recognizes the following options:
At run-time, the ocamlyacc-generated parser can be debugged by setting the p option in the OCAMLRUNPARAM environment variable (see section 15.2). This causes the pushdown automaton executing the parser to print a trace of its actions (tokens shifted, rules reduced, etc.). The trace mentions rule numbers and state numbers that can be interpreted by looking at the file grammar.output generated by ocamlyacc -v.
The all-time favorite: a desk calculator. This program reads arithmetic expressions on standard input, one per line, and prints their values. Here is the grammar definition:
/* File parser.mly */
%token <int> INT
%token PLUS MINUS TIMES DIV
%token LPAREN RPAREN
%token EOL
%left PLUS MINUS        /* lowest precedence */
%left TIMES DIV         /* medium precedence */
%nonassoc UMINUS        /* highest precedence */
%start main             /* the entry point */
%type <int> main
%%
main:
    expr EOL                { $1 }
;
expr:
    INT                     { $1 }
  | LPAREN expr RPAREN      { $2 }
  | expr PLUS expr          { $1 + $3 }
  | expr MINUS expr         { $1 - $3 }
  | expr TIMES expr         { $1 * $3 }
  | expr DIV expr           { $1 / $3 }
  | MINUS expr %prec UMINUS { - $2 }
;
Here is the definition for the corresponding lexer:
(* File lexer.mll *)
{
open Parser        (* The type token is defined in parser.mli *)
exception Eof
}
rule token = parse
    [' ' '\t']          { token lexbuf }     (* skip blanks *)
  | ['\n' ]             { EOL }
  | ['0'-'9']+ as lxm   { INT(int_of_string lxm) }
  | '+'                 { PLUS }
  | '-'                 { MINUS }
  | '*'                 { TIMES }
  | '/'                 { DIV }
  | '('                 { LPAREN }
  | ')'                 { RPAREN }
  | eof                 { raise Eof }
Here is the main program, that combines the parser with the lexer:
(* File calc.ml *)
let _ =
  try
    let lexbuf = Lexing.from_channel stdin in
    while true do
      let result = Parser.main Lexer.token lexbuf in
      print_int result; print_newline(); flush stdout
    done
  with Lexer.Eof -> exit 0
To compile everything, execute:
ocamllex lexer.mll       # generates lexer.ml
ocamlyacc parser.mly     # generates parser.ml and parser.mli
ocamlc -c parser.mli
ocamlc -c lexer.ml
ocamlc -c parser.ml
ocamlc -c calc.ml
ocamlc -o calc lexer.cmo parser.cmo calc.cmo
The deterministic automata generated by ocamllex are limited to at most 32767 transitions. The message above indicates that your lexer definition is too complex and overflows this limit. This is commonly caused by lexer definitions that have separate rules for each of the alphabetic keywords of the language, as in the following example.
rule token = parse
    "keyword1"   { KWD1 }
  | "keyword2"   { KWD2 }
  | ...
  | "keyword100" { KWD100 }
  | ['A'-'Z' 'a'-'z'] ['A'-'Z' 'a'-'z' '0'-'9' '_'] * as id { IDENT id }
To keep the generated automata small, rewrite those definitions with only one general "identifier" rule, followed by a hashtable lookup to separate keywords from identifiers:
{
let keyword_table = Hashtbl.create 53
let _ =
  List.iter (fun (kwd, tok) -> Hashtbl.add keyword_table kwd tok)
    [ "keyword1", KWD1;
      "keyword2", KWD2;
      ...
      "keyword100", KWD100 ]
}
rule token = parse
    ['A'-'Z' 'a'-'z'] ['A'-'Z' 'a'-'z' '0'-'9' '_'] * as id
    { try Hashtbl.find keyword_table id
      with Not_found -> IDENT id }
Parsers generated by ocamlyacc are not thread-safe. Those parsers rely on an internal work state which is shared by all ocamlyacc generated parsers. The menhir parser generator is a better option if you want thread-safe parsers.
The ocamldep command scans a set of OCaml source files (.ml and .mli files) for references to external compilation units, and outputs dependency lines in a format suitable for the make utility. This ensures that make will compile the source files in the correct order, and recompile the files that need it when a source file is modified.
The typical usage is:
ocamldep options *.mli *.ml > .depend
where *.mli *.ml expands to all source files in the current directory and .depend is the file that should contain the dependencies. (See below for a typical Makefile.)
Dependencies are generated both for compiling with the bytecode compiler ocamlc and with the native-code compiler ocamlopt.
The following command-line options are recognized by ocamldep.
filename: Module1 Module2 ... ModuleN, where Module1, ..., ModuleN are the names of the compilation units referenced within the file filename; these names are not resolved to source file names. Such raw dependencies cannot be used by make, but can be post-processed by other tools such as Omake.
Here is a template Makefile for an OCaml program.
OCAMLC=ocamlc
OCAMLOPT=ocamlopt
OCAMLDEP=ocamldep
INCLUDES=                 # all relevant -I options here
OCAMLFLAGS=$(INCLUDES)    # add other options for ocamlc here
OCAMLOPTFLAGS=$(INCLUDES) # add other options for ocamlopt here

# prog1 should be compiled to bytecode, and is composed of three
# units: mod1, mod2 and mod3.

# The list of object files for prog1
PROG1_OBJS=mod1.cmo mod2.cmo mod3.cmo

prog1: $(PROG1_OBJS)
	$(OCAMLC) -o prog1 $(OCAMLFLAGS) $(PROG1_OBJS)

# prog2 should be compiled to native-code, and is composed of two
# units: mod4 and mod5.

# The list of object files for prog2
PROG2_OBJS=mod4.cmx mod5.cmx

prog2: $(PROG2_OBJS)
	$(OCAMLOPT) -o prog2 $(OCAMLFLAGS) $(PROG2_OBJS)

# Common rules
%.cmo: %.ml
	$(OCAMLC) $(OCAMLFLAGS) -c $<

%.cmi: %.mli
	$(OCAMLC) $(OCAMLFLAGS) -c $<

%.cmx: %.ml
	$(OCAMLOPT) $(OCAMLOPTFLAGS) -c $<

# Clean up
clean:
	rm -f prog1 prog2
	rm -f *.cm[iox]

# Dependencies
depend:
	$(OCAMLDEP) $(INCLUDES) *.mli *.ml > .depend

include .depend
If you use module aliases to give shorter names to modules, you need to change the above definitions. Assuming that your map file is called mylib.mli, here are minimal modifications.
OCAMLFLAGS=$(INCLUDES) -open Mylib

mylib.cmi: mylib.mli
	$(OCAMLC) $(INCLUDES) -no-alias-deps -w -49 -c $<

depend:
	$(OCAMLDEP) $(INCLUDES) -map mylib.mli $(PROG1_OBJS:.cmo=.ml) > .depend
Note that in this case you should not compute dependencies for mylib.mli together with the other files, hence the need to pass the list of files to process explicitly. If mylib.mli itself has dependencies, you should compute them using -as-map.
This chapter describes OCamldoc, a tool that generates documentation from special comments embedded in source files. The comments used by OCamldoc are of the form (**...*) and follow the format described in section 19.2.
OCamldoc can produce documentation in various formats: HTML, LaTeX, Texinfo, Unix man pages, and dot dependency graphs. Moreover, users can add their own custom generators, as explained in section 19.3.
In this chapter, we use the word element to refer to any of the following parts of an OCaml source file: a type declaration, a value, a module, an exception, a module type, a type constructor, a record field, a class, a class type, a class method, a class value or a class inheritance clause.
OCamldoc is invoked via the command ocamldoc, as follows:
ocamldoc options sourcefiles
The following options determine the format for the generated documentation.
OCamldoc calls the OCaml type-checker to obtain type information. The following options impact the type-checking phase. They have the same meaning as for the ocamlc and ocamlopt commands.
The following options apply in conjunction with the -html option:
module M : functor (A:Module) -> functor (B:Module2) -> sig .. end is displayed as:
module M (A:Module) (B:Module2) : sig .. end
The following options apply in conjunction with the -latex option:
These options are useful when you have, for example, a type and a value with the same name. If you do not specify prefixes, LaTeX will complain about multiply defined labels.
The following options apply in conjunction with the -texi option:
The following options apply in conjunction with the -dot option:
The following options apply in conjunction with the -man option:
Information on a module can be extracted either from the .mli or .ml file, or both, depending on the files given on the command line. When both .mli and .ml files are given for the same module, information extracted from these files is merged according to the following rules:
The following rules must be respected in order to avoid name clashes resulting in cross-reference errors:
Comments containing documentation material are called special comments and are written between (** and *). Special comments must start exactly with (**. Comments beginning with ( and more than two * are ignored.
OCamldoc can associate comments to some elements of the language encountered in the source files. The association is made according to the locations of comments with respect to the language elements. The locations of comments in .mli and .ml files are different.
A special comment is associated to an element if it is placed before or after the element.
A special comment before an element is associated to this element if there is no blank line or another special comment between the special comment and the element; a regular comment may occur between them.
A special comment after an element is associated to this element if there is no blank line or comment between the special comment and the element.
There are two exceptions: for constructors and record fields in type definitions, the associated comment can only be placed after the constructor or field definition, without blank lines or other comments between them. The special comment for a constructor with another constructor following must be placed before the '|' character separating the two constructors.
The following sample interface file foo.mli illustrates the placement rules for comments in .mli files.
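The sketch below is a minimal illustration of these rules (the names f, g and t are only placeholders):

(** The first special comment of the file is associated with the whole module. *)

(** Comment associated with f: no blank line separates it from the declaration. *)
val f : int -> int -> int

val g : int -> int
(** Comment associated with g, placed just after its declaration. *)

(** Comment for type t. *)
type t =
  | A          (** Comment for constructor A, placed before the '|' introducing B. *)
  | B of int   (** Comment for constructor B. *)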
A special comment is associated to an element if it is placed before the element and there is no blank line between the comment and the element. However, a simple comment can occur between the special comment and the element. There are two exceptions, for constructors and record fields in type definitions, whose associated comment must be placed after the constructor or field definition, without blank lines between them. The special comment for a constructor with another constructor following must be placed before the '|' character separating the two constructors.
The following example of file toto.ml shows where to place comments in a .ml file.
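A minimal sketch, with placeholder names, might be:

(** Comment associated with the function f: no blank line separates it from the definition. *)
(* A simple comment may occur here without breaking the association. *)
let f x = x + 1

(** Comment for exception E. *)
exception E of string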
The special comment (**/**) tells OCamldoc to discard elements placed after this comment, up to the end of the current class, class type, module or module type, or up to the next stop comment. For instance:
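In the sketch below (the names are placeholders), the method internal is omitted from the generated documentation:

class type counter =
  object
    method value : int
    (** Documented as usual. *)

    (**/**)

    method internal : int -> unit
    (** Hidden, together with everything up to the end of the class type
        or the next stop comment. *)
  end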
The -no-stop option to ocamldoc causes the Stop special comments to be ignored.
The inside of documentation comments (**...*) consists of free-form text with optional formatting annotations, followed by optional tags giving more specific information about parameters, version, authors, and so on. The tags are distinguished by a leading @ character. Thus, a documentation comment has the following shape:
(** The comment begins with a description, which is text formatted
    according to the rules described in the next section.
    The description continues until the first non-escaped '@' character.
    @author Mr Smith
    @param x description for parameter x
*)
Some elements support only a subset of all @-tags. Tags that are not relevant to the documented element are simply ignored. For instance, all tags are ignored when documenting type constructors, record fields, and class inheritance clauses. Similarly, a @param tag on a class instance variable is ignored.
Finally, (**) is the empty documentation comment.
Here is the BNF grammar for the simple markup language used to format text descriptions.
text-element | ::= |
| inline-text-element | |
| blank-line | force a new line. |
| { {0 ... 9}+ inline-text } | format text as a section header; the integer following { indicates the sectioning level. |
| { {0 ... 9}+ : label inline-text } | same, but also associate the name label to the current point. This point can be referenced by its fully-qualified label in a {! command, just like any other element. |
| {b inline-text } | set text in bold. |
| {i inline-text } | set text in italic. |
| {e inline-text } | emphasize text. |
| {C inline-text } | center text. |
| {L inline-text } | left align text. |
| {R inline-text } | right align text. |
| {ul list } | build a list. |
| {ol list } | build an enumerated list. |
| {{: string } inline-text } | put a link to the given address (given as string) on the given text. |
| [ string ] | set the given string in source code style. |
| {[ string ]} | set the given string in preformatted source code style. |
| {v string v} | set the given string in verbatim style. |
| {% string %} | target-specific content (LaTeX code by default, see details in 19.2.4.4). |
| {! string } | insert a cross-reference to an element (see section 19.2.4.2 for the syntax of cross-references). |
| {{! string } inline-text } | insert a cross-reference with the given text. |
| {!modules: string string ... } | insert an index table for the given module names. Used in HTML only. |
| {!indexlist} | insert a table of links to the various indexes (types, values, modules, ...). Used in HTML only. |
| {^ inline-text } | set text in superscript. |
| {_ inline-text } | set text in subscript. |
| escaped-string | typeset the given string as is; special characters ('{', '}', '[', ']' and '@') must be escaped by a '\' |
A shortcut syntax exists for lists and enumerated lists:
(** Here is a {b list}
- item 1
- item 2
- item 3
The list is ended by the blank line.*)
is equivalent to:
(** Here is a {b list}
{ul {- item 1}
{- item 2}
{- item 3}}
The list is ended by the blank line.*)
The same shortcut is available for enumerated lists, using '+' instead of '-'. Note that only one list can be defined by this shortcut in nested lists.
Cross-references are fully qualified element names, as in the example {!Foo.Bar.t}. This is an ambiguous reference as it may designate a type name, a value name, a class name, etc. It is possible to make explicit the intended syntactic class, using {!type:Foo.Bar.t} to designate a type, and {!val:Foo.Bar.t} a value of the same name.
The list of possible syntactic classes is as follows:
tag | syntactic class |
module: | module |
modtype: | module type |
class: | class |
classtype: | class type |
val: | value |
type: | type |
exception: | exception |
attribute: | attribute |
method: | class method |
section: | ocamldoc section |
const: | variant constructor |
recfield: | record field |
In the case of variant constructors or record fields, the constructor or field name should be preceded by the name of the corresponding type, to avoid the ambiguity of several types having the same constructor names. For example, the constructor Node of the type tree is referenced as {!tree.Node} or {!const:tree.Node}, or possibly {!Mod1.Mod2.tree.Node} from outside the module.
In the description of a value, type, exception, module, module type, class or class type, the first sentence is sometimes used in indexes, or when just a part of the description is needed. The first sentence is composed of the first characters of the description, up to the first dot followed by a blank or a newline, or the first blank line, outside of the following text formatting: {ul list}, {ol list}, [ string ], {[ string ]}, {v string v}, {% string %}, {! string}, {^ text}, {_ text}.
The content inside {%foo: ... %} is target-specific and will only be interpreted by the backend foo, and ignored by the others. The backends of the distribution are latex, html, texi and man. If no target is specified (syntax {% ... %}), latex is chosen by default. Custom generators may support their own target prefix.
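For instance, in a sketch such as the following (the value name and the formula are placeholders), the formula is rendered only by the latex backend and ignored by the other backends:

(** Sum of the integers from 0 to [n].
    {%latex: That is, $\sum_{i=0}^{n} i$. %} *)
val sum_up_to : int -> int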
The HTML tags <b>..</b>, <code>..</code>, <i>..</i>, <ul>..</ul>, <ol>..</ol>, <li>..</li>, <center>..</center> and <h[0-9]>..</h[0-9]> can be used instead of, respectively, {b ..} , [..] , {i ..} , {ul ..} , {ol ..} , {li ..} , {C ..} and {[0-9] ..}.
The following table gives the list of predefined @-tags, with their syntax and meaning.
@author string | The author of the element. One author per @author tag. There may be several @author tags for the same element. |
@deprecated text | The text should describe when the element was deprecated, what to use as a replacement, and possibly the reason for deprecation. |
@param id text | Associate the given description (text) to the given parameter name id. This tag is used for functions, methods, classes and functors. |
@raise Exc text | Explain that the element may raise the exception Exc. |
@return text | Describe the return value and its possible values. This tag is used for functions and methods. |
@see < URL > text | Add a reference to the URL with the given text as comment. |
@see 'filename' text | Add a reference to the given file name (written between single quotes), with the given text as comment. |
@see "document-name" text | Add a reference to the given document name (written between double quotes), with the given text as comment. |
@since string | Indicate when the element was introduced. |
@before version text | Associate the given description (text) to the given version in order to document compatibility issues. |
@version string | The version number for the element. |
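As an illustration, a comment combining several of these tags might look like the following sketch (the function and its description are placeholders):

(** Copy [len] bytes from [src] to [dst].
    @param src the source buffer
    @param dst the destination buffer
    @param len the number of bytes to copy
    @raise Invalid_argument if [len] is negative
    @return the number of bytes actually copied
    @since 4.0 *)
val blit : src:bytes -> dst:bytes -> len:int -> int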
You can use custom tags in the documentation comments, but they will have no effect if the generator used does not handle them. To use a custom tag, for example foo, just put @foo with some text in your comment, as in:
(** My comment to show you a custom tag.
    @foo this is the text argument to the [foo] custom tag. *)
To handle custom tags, you need to define a custom generator, as explained in section 19.3.2.
OCamldoc operates in two steps: first, analysis of the source files; second, generation of the documentation from the information gathered during the analysis step.
Users can provide their own documentation generator to be used during step 2 instead of the default generators. All the information retrieved during the analysis step is available through the Odoc_info module, which gives access to all the types and functions representing the elements found in the given modules, with their associated description.
The files you can use to define custom generators are installed in the ocamldoc sub-directory of the OCaml standard library.
The type of a generator module depends on the kind of generated documentation. Here is the list of generator module types, with the name of the generator class in the module:
That is, to define a new generator, one must implement a module with the expected signature and the given generator class, providing the generate method as the entry point that makes the generator produce documentation for a given list of modules:
method generate : Odoc_info.Module.t_module list -> unit
This method will be called with the list of analysed and possibly merged Odoc_info.t_module structures.
It is recommended to inherit from the current generator of the same kind as the one you want to define. This makes it possible to load several custom generators and combine the improvements brought by each one.
This is done using first-class modules (see section 12.5).
The easiest way to define a custom generator is to follow this example, which extends the current HTML generator. We don't have to know whether this is the original HTML generator defined in ocamldoc or whether it has already been extended by a previously loaded custom generator:
module Generator (G : Odoc_html.Html_generator) =
struct
  class html =
    object(self)
      inherit G.html as html
      (* ... *)
      method generate module_list =
        (* ... *)
        ()
      (* ... *)
    end
end;;

let _ = Odoc_args.extend_html_generator (module Generator : Odoc_gen.Html_functor);;
To know which methods to override and/or which methods are available, have a look at the different base implementations, depending on the kind of generator you are extending:
Making a custom generator handle custom tags (see 19.2.5) is very simple.
Here is how to develop an HTML generator handling your custom tags.
The class Odoc_html.Generator.html inherits from the class Odoc_html.info, containing a field tag_functions which is a list of pairs composed of a custom tag (e.g. "foo") and a function taking a text and returning HTML code (of type string). To handle a new tag bar, extend the current HTML generator and complete the tag_functions field:
module Generator (G : Odoc_html.Html_generator) =
struct
  class html =
    object(self)
      inherit G.html

      (** Return HTML code for the given text of a bar tag. *)
      method html_of_bar t = (* your code here *)

      initializer
        tag_functions <- ("bar", self#html_of_bar) :: tag_functions
    end
end

let _ = Odoc_args.extend_html_generator (module Generator : Odoc_gen.Html_functor);;
Another method of the class Odoc_html.info will look for the function associated to a custom tag and apply it to the text given to the tag. If no function is associated to a custom tag, then the method prints a warning message on stderr.
You can proceed in the same way for other kinds of generators.
The command line analysis is performed after loading the module containing the documentation generator, thus allowing command line options to be added to the list of existing ones. Adding an option can be done with the function
Odoc_args.add_option : string * Arg.spec * string -> unit
Note: Existing command line options can be redefined using this function.
Let custom.ml be the file defining a new generator class. Compilation of custom.ml can be performed by the following command:
ocamlc -I +ocamldoc -c custom.ml
The file custom.cmo is created and can be used this way:
ocamldoc -g custom.cmo other-options source-files
Options selecting a built-in generator to ocamldoc, such as -html, have no effect if a custom generator of the same kind is provided using -g. If the kinds do not match, the selected built-in generator is used and the custom one is ignored.
It is possible to define a generator class in several modules, which are defined in several files file1.ml[i], file2.ml[i], ..., filen.ml[i]. A .cma library file must be created, including all these files.
The following commands create the custom.cma file from files file1.ml[i], ..., filen.ml[i]:
ocamlc -I +ocamldoc -c file1.ml[i]
ocamlc -I +ocamldoc -c file2.ml[i]
...
ocamlc -I +ocamldoc -c filen.ml[i]
ocamlc -o custom.cma -a file1.cmo file2.cmo ... filen.cmo
Then, the following command uses custom.cma as custom generator:
ocamldoc -g custom.cma other-options source-files
This chapter describes the OCaml source-level replay debugger ocamldebug.
Unix: The debugger is available on Unix systems that provide BSD sockets.
Windows: The debugger is available under the Cygwin port of OCaml, but not under the native Win32 ports.
Before the debugger can be used, the program must be compiled and linked with the -g option: all .cmo and .cma files that are part of the program should have been created with ocamlc -g, and they must be linked together with ocamlc -g.
Compiling with -g entails no penalty on the running time of programs: object files and bytecode executable files are bigger and take longer to produce, but the executable files run at exactly the same speed as if they had been compiled without -g.
The OCaml debugger is invoked by running the program ocamldebug with the name of the bytecode executable file as first argument:
ocamldebug [options] program [arguments]
The arguments following program are optional, and are passed as command-line arguments to the program being debugged. (See also the set arguments command.)
The following command-line options are recognized:
On start-up, the debugger will read commands from an initialization file before giving control to the user. The default file is .ocamldebug in the current directory if it exists, otherwise .ocamldebug in the user's home directory.
The command quit exits the debugger. You can also exit the debugger by typing an end-of-file character (usually ctrl-D).
Typing an interrupt character (usually ctrl-C) will not exit the debugger, but will terminate the action of any debugger command that is in progress and return to the debugger command level.
A debugger command is a single line of input. It starts with a command name, which is followed by arguments depending on this name. Examples:
run
goto 1000
set arguments arg1 arg2
A command name can be truncated as long as there is no ambiguity. For instance, go 1000 is understood as goto 1000, since there are no other commands whose name starts with go. For the most frequently used commands, ambiguous abbreviations are allowed. For instance, r stands for run even though there are other commands starting with r. You can test the validity of an abbreviation using the help command.
If the previous command has been successful, a blank line (typing just RET) will repeat it.
The OCaml debugger has a simple on-line help system, which gives a brief description of each command and variable.
Events are "interesting" locations in the source code, corresponding to the beginning or end of evaluation of "interesting" sub-expressions. Events are the unit of single-stepping (stepping goes to the next or previous event encountered in the program execution). Also, breakpoints can only be set at events. Thus, events play the role of line numbers in debuggers for conventional languages.
During program execution, a counter is incremented at each event encountered. The value of this counter is referred to as the current time. Thanks to reverse execution, it is possible to jump back and forth to any time of the execution.
Here is where the debugger events (written •) are located in the source code:
(f arg) •
fun x y z -> • ...
function pat1 -> • expr1 | ... | patN -> • exprN
expr1; • expr2; • ...; • exprN
if cond then • expr1 else • expr2
while cond do • body done     for i = a to b do • body done
Exceptions: A function application followed by a function return is replaced by the compiler by a jump (tail-call optimization). In this case, no event is put after the function application.
The debugger starts executing the debugged program only when needed. This allows setting breakpoints or assigning debugger variables before execution starts. There are several ways to start execution:
The execution of a program is affected by certain information it receives when the debugger starts it, such as the command-line arguments to the program and its working directory. The debugger provides commands to specify this information (set arguments and cd). These commands must be used before program execution starts. If you try to change the arguments or the working directory after starting your program, the debugger will kill the program (after asking for confirmation).
The following commands execute the program forward or backward, starting at the current time. The execution will stop either when specified by the command or when a breakpoint is encountered.
You can jump directly to a given time, without stopping on breakpoints, using the goto command.
As you move through the program, the debugger maintains a history of the successive times you stop at. The last command can be used to revisit these times: each last command moves one step back through the history. That is useful mainly to undo commands such as step and next.
A breakpoint causes the program to stop whenever a certain point in the program is reached. It can be set in several ways using the break command. Breakpoints are assigned numbers when set, for further reference. The most comfortable way to set breakpoints is through the Emacs interface (see section 20.10).
Each time the program performs a function application, it saves the location of the application (the return address) in a block of data called a stack frame. The frame also contains the local variables of the caller function. All the frames are allocated in a region of memory called the call stack. The command backtrace (or bt) displays parts of the call stack.
At any time, one of the stack frames is "selected" by the debugger; several debugger commands refer implicitly to the selected frame. In particular, whenever you ask the debugger for the value of a local variable, the value is found in the selected frame. The commands frame, up and down select whichever frame you are interested in.
When the program stops, the debugger automatically selects the currently executing frame and describes it briefly as the frame command does.
The debugger can print the current value of simple expressions. The expressions can involve program variables: all the identifiers that are in scope at the selected program point can be accessed.
Expressions that can be printed are a subset of OCaml expressions, as described by the following grammar:
simple-expr ::= lowercase-ident
| { capitalized-ident . } lowercase-ident
| *
| $ integer
| simple-expr . lowercase-ident
| simple-expr .( integer )
| simple-expr .[ integer ]
| ! simple-expr
The first two cases refer to a value identifier, either unqualified or qualified by the path to the structure that defines it. * refers to the result just computed (typically, the value of a function application), and is valid only if the selected event is an "after" event (typically, a function application). $ integer refers to a previously printed value. The remaining four forms select part of an expression: respectively, a record field, an array element, a string element, and the current contents of a reference.
When printing a complex expression, a name of the form $integer is automatically assigned to its value. Such names are also assigned to parts of the value that cannot be printed because the maximal printing depth is exceeded. Named values can be printed later on with the commands p $integer or d $integer. Named values are valid only as long as the program is stopped. They are forgotten as soon as the program resumes execution.
A shell is used to pass the arguments to the debugged program. You can therefore use wildcards, shell variables, and file redirections inside the arguments. To debug programs that read from standard input, it is recommended to redirect their input from a file (using set arguments < input-file), otherwise input to the program and input to the debugger are not properly separated, and inputs are not properly replayed when running the program backwards.
The loadingmode variable controls how the program is executed.
The debugger searches for source files and compiled interface files in a list of directories, the search path. The search path initially contains the current directory . and the standard library directory. The directory command adds directories to the path.
Whenever the search path is modified, the debugger will clear any information it may have cached about the files.
Each time a program is started in the debugger, it inherits its working directory from the current working directory of the debugger. This working directory is initially whatever it inherited from its parent process (typically the shell), but you can specify a new working directory in the debugger with the cd command or the -cd command-line option.
In some cases, you may want to turn reverse execution off. This speeds up the program execution, and is also sometimes useful for interactive programs.
Normally, the debugger takes checkpoints of the program state from time to time. That is, it makes a copy of the current state of the program (using the Unix system call fork). If the variable checkpoints is set to off, the debugger will not take any checkpoints.
When the program issues a call to fork, the debugger can either follow the child or the parent. By default, the debugger follows the parent process. The variable follow_fork_mode controls this behavior:
The debugger is compatible with the Dynlink module. However, when an external module is not yet loaded, it is impossible to set a breakpoint in its code. In order to facilitate setting breakpoints in dynamically loaded code, the debugger stops the program each time new modules are loaded. This behavior can be disabled using the break_on_load variable:
The debugger communicates with the program being debugged through a Unix socket. You may need to change the socket name, for example if you need to run the debugger on one machine and your program on another.
On the debugged program side, the socket name is passed through the CAML_DEBUG_SOCKET environment variable.
Several variables allow you to fine-tune the debugger. Reasonable defaults are provided, and you should normally not have to change them.
As checkpointing is quite expensive, it must not be done too often. On the other hand, backward execution is faster when checkpoints are taken more often. In particular, backward single-stepping is more responsive when many checkpoints have been taken just before the current time. To fine-tune the checkpointing strategy, the debugger does not take checkpoints at the same frequency for long displacements (e.g. run) and small ones (e.g. step). The two variables bigstep and smallstep contain the number of events between two checkpoints in each case.
The following commands display information on checkpoints and events:
Just as in the toplevel system (section 14.2), the user can register functions for printing values of certain types. For technical reasons, the debugger cannot call printing functions that reside in the program being debugged. The code for the printing functions must therefore be loaded explicitly in the debugger.
The value path printer-name must refer to one of the functions defined by the object files loaded using load_printer. It cannot reference the functions of the program being debugged.
The most user-friendly way to use the debugger is to run it under Emacs with the OCaml mode available through MELPA and also at https://github.com/ocaml/caml-mode.
The OCaml debugger is started under Emacs by the command M-x camldebug, with argument the name of the executable file progname to debug. Communication with the debugger takes place in an Emacs buffer named *camldebug-progname*. The editing and history facilities of Shell mode are available for interacting with the debugger.
In addition, Emacs displays the source files containing the current event (the current position in the program execution) and highlights the location of the event. This display is updated synchronously with the debugger action.
The following bindings for the most common debugger commands are available in the *camldebug-progname* buffer:
In all buffers in OCaml editing mode, the following debugger commands are also available:
This chapter describes how the execution of OCaml programs can be profiled, by recording how many times functions are called, branches of conditionals are taken, and so on.
Before profiling an execution, the program must be compiled in profiling mode, using the ocamlcp front-end to the ocamlc compiler (see chapter 13) or the ocamloptp front-end to the ocamlopt compiler (see chapter 16). When compiling modules separately, ocamlcp or ocamloptp must be used when compiling the modules (production of .cmo or .cmx files), and can also be used (though this is not strictly necessary) when linking them together.
If a module (.ml file) doesn't have a corresponding interface (.mli file), then compiling it with ocamlcp will produce object files (.cmi and .cmo) that are not compatible with the ones produced by ocamlc, which may lead to problems (if the .cmi or .cmo is still around) when switching between profiling and non-profiling compilations. To avoid this problem, you should always have a .mli file for each .ml file. The same problem exists with ocamloptp.
To make sure your programs can be compiled in profiling mode, avoid using any identifier that begins with __ocaml_prof.
The amount of profiling information can be controlled through the -P option to ocamlcp or ocamloptp, followed by one or several letters indicating which parts of the program should be profiled: a (all of the following), f (function calls), i (if...then...else branches), l (while and for loops), m (match and function branches), and t (try...with branches).
For instance, compiling with ocamlcp -P film profiles function calls, if...then...else, loops and pattern matching.
Calling ocamlcp or ocamloptp without the -P option defaults to -P fm, meaning that only function calls and pattern matching are profiled.
For compatibility with previous releases, ocamlcp also accepts the -p option, with the same arguments and behaviour as -P.
The ocamlcp and ocamloptp commands also accept all the options of the corresponding ocamlc or ocamlopt compiler, except the -pp (preprocessing) option.
Running an executable that has been compiled with ocamlcp or ocamloptp records the execution counts for the specified parts of the program and saves them in a file called ocamlprof.dump in the current directory.
If the environment variable OCAMLPROF_DUMP is set when the program exits, its value is used as the file name instead of ocamlprof.dump.
The dump file is written only if the program terminates normally (by calling exit or by falling through). It is not written if the program terminates with an uncaught exception.
If a compatible dump file already exists in the current directory, then the profiling information is accumulated in this dump file. This allows, for instance, the profiling of several executions of a program on different inputs. Note that dump files produced by byte-code executables (compiled with ocamlcp) are compatible with the dump files produced by native executables (compiled with ocamloptp).
The ocamlprof command produces a source listing of the program modules where execution counts have been inserted as comments. For instance,
ocamlprof foo.ml
prints the source code for the foo module, with comments indicating how many times the functions in this module have been called. Naturally, this information is accurate only if the source file has not been modified after it was compiled.
The following options are recognized by ocamlprof:
Profiling with ocamlprof only records execution counts, not the actual time spent within each function. There is currently no way to perform time profiling on bytecode programs generated by ocamlc. For time profiling of native code, users are recommended to use standard tools such as perf (on Linux), Instruments (on macOS) and DTrace. Profiling with gprof is no longer supported.
This chapter describes how user-defined primitives, written in C, can be linked with OCaml code and called from OCaml functions, and how these C functions can call back to OCaml code.
User primitives are declared in an implementation file or struct...end module expression using the external keyword:
external name : type = C-function-name
This defines the value name name as a function with type type that executes by calling the given C function. For instance, here is how the seek_in primitive is declared in the standard library module Stdlib:
external seek_in : in_channel -> int -> unit = "caml_ml_seek_in"
Primitives with several arguments are always curried. The C function does not necessarily have the same name as the ML function.
External functions thus defined can be specified in interface files or sig...end signatures either as regular values
val name : type
thus hiding their implementation as C functions, or explicitly as âmanifestâ external functions
external name : type = C-function-name
The latter is slightly more efficient, as it allows clients of the module to call the C function directly instead of going through the corresponding OCaml function. On the other hand, it should not be used in library modules if they have side-effects at toplevel, as this direct call interferes with the linker's algorithm for removing unused modules from libraries at link-time.
The arity (number of arguments) of a primitive is automatically determined from its OCaml type in the external declaration, by counting the number of function arrows in the type. For instance, seek_in above has arity 2, and the caml_ml_seek_in C function is called with two arguments. Similarly,
external seek_in_pair: in_channel * int -> unit = "caml_ml_seek_in_pair"
has arity 1, and the caml_ml_seek_in_pair C function receives one argument (which is a pair of OCaml values).
Type abbreviations are not expanded when determining the arity of a primitive. For instance,
type int_endo = int -> int
external f : int_endo -> int_endo = "f"
external g : (int -> int) -> (int -> int) = "f"
f has arity 1, but g has arity 2. This allows a primitive to return a functional value (as in the f example above): just remember to name the functional return type in a type abbreviation.
The language accepts external declarations with one or two flag strings in addition to the C functionâs name. These flags are reserved for the implementation of the standard library.
User primitives with arity n ≤ 5 are implemented by C functions that take n arguments of type value, and return a result of type value. The type value is the type of the representations for OCaml values. It encodes objects of several base types (integers, floating-point numbers, strings, ...) as well as OCaml data structures. The type value and the associated conversion functions and macros are described in detail below. For instance, here is the declaration for the C function implementing the In_channel.input primitive, which takes 4 arguments:
CAMLprim value input(value channel, value buffer, value offset, value length) { ... }
When the primitive function is applied in an OCaml program, the C function is called with the values of the expressions to which the primitive is applied as arguments. The value returned by the function is passed back to the OCaml program as the result of the function application.
User primitives with arity greater than 5 should be implemented by two C functions. The first function, to be used in conjunction with the bytecode compiler ocamlc, receives two arguments: a pointer to an array of OCaml values (the values for the arguments), and an integer which is the number of arguments provided. The other function, to be used in conjunction with the native-code compiler ocamlopt, takes its arguments directly. For instance, here are the two C functions for the 7-argument primitive Nat.add_nat:
CAMLprim value add_nat_native(value nat1, value ofs1, value len1,
                              value nat2, value ofs2, value len2,
                              value carry_in)
{
  ...
}

CAMLprim value add_nat_bytecode(value * argv, int argn)
{
  return add_nat_native(argv[0], argv[1], argv[2], argv[3],
                        argv[4], argv[5], argv[6]);
}
The names of the two C functions must be given in the primitive declaration, as follows:
external name : type = bytecode-C-function-name native-code-C-function-name
For instance, in the case of add_nat, the declaration is:
external add_nat: nat -> int -> int -> nat -> int -> int -> int -> int = "add_nat_bytecode" "add_nat_native"
Implementing a user primitive is actually two separate tasks: on the one hand, decoding the arguments to extract C values from the given OCaml values, and encoding the return value as an OCaml value; on the other hand, actually computing the result from the arguments. Except for very simple primitives, it is often preferable to have two distinct C functions to implement these two tasks. The first function actually implements the primitive, taking native C values as arguments and returning a native C value. The second function, often called the "stub code", is a simple wrapper around the first function that converts its arguments from OCaml values to C values, calls the first function, and converts the returned C value to an OCaml value. For instance, here is the stub code for the Int64.float_of_bits primitive:
CAMLprim value caml_int64_float_of_bits(value vi)
{
  return caml_copy_double(caml_int64_float_of_bits_unboxed(Int64_val(vi)));
}
(Here, caml_copy_double and Int64_val are conversion functions and macros for the type value, that will be described later. The CAMLprim macro expands to the required compiler directives to ensure that the function is exported and accessible from OCaml.) The hard work is performed by the function caml_int64_float_of_bits_unboxed, which is declared as:
double caml_int64_float_of_bits_unboxed(int64_t i) { ... }
To write C code that operates on OCaml values, the following include files are provided:
Include file | Provides |
caml/mlvalues.h | definition of the value type, and conversion macros |
caml/alloc.h | allocation functions (to create structured OCaml objects) |
caml/memory.h | miscellaneous memory-related functions and macros (for GC interface, in-place modification of structures, etc). |
caml/fail.h | functions for raising exceptions (see section 22.4.5) |
caml/callback.h | callback from C to OCaml (see section 22.7). |
caml/custom.h | operations on custom blocks (see section 22.9). |
caml/intext.h | operations for writing user-defined serialization and deserialization functions for custom blocks (see section 22.9). |
caml/threads.h | operations for interfacing in the presence of multiple threads (see section 22.12). |
These files reside in the caml/ subdirectory of the OCaml standard library directory, which is returned by the command ocamlc -where (usually /usr/local/lib/ocaml or /usr/lib/ocaml).
The OCaml runtime system comprises three main parts: the bytecode interpreter, the memory manager, and a set of C functions that implement the primitive operations. Some bytecode instructions are provided to call these C functions, designated by their offset in a table of functions (the table of primitives).
In the default mode, the OCaml linker produces bytecode for the standard runtime system, with a standard set of primitives. References to primitives that are not in this standard set result in the "unavailable C primitive" error. (Unless dynamic loading of C libraries is supported; see section 22.1.4 below.)
In the "custom runtime" mode, the OCaml linker scans the object files and determines the set of required primitives. Then, it builds a suitable runtime system, by calling the native code linker with:
This builds a runtime system with the required primitives. The OCaml linker generates bytecode for this custom runtime system. The bytecode is appended to the end of the custom runtime system, so that it will be automatically executed when the output file (custom runtime + bytecode) is launched.
To link in "custom runtime" mode, execute the ocamlc command with:
If you are using the native-code compiler ocamlopt, the -custom flag is not needed, as the final linking phase of ocamlopt always builds a standalone executable. To build a mixed OCaml/C executable, execute the ocamlopt command with:
Starting with Objective Caml 3.00, it is possible to record the -custom option as well as the names of C libraries in an OCaml library file .cma or .cmxa. For instance, consider an OCaml library mylib.cma, built from the OCaml object files a.cmo and b.cmo, which reference C code in libmylib.a. If the library is built as follows:
ocamlc -a -o mylib.cma -custom a.cmo b.cmo -cclib -lmylib
users of the library can simply link with mylib.cma:
ocamlc -o myprog mylib.cma ...
and the system will automatically add the -custom and -cclib -lmylib options, achieving the same effect as
ocamlc -o myprog -custom a.cmo b.cmo ... -cclib -lmylib
The alternative is of course to build the library without extra options:
ocamlc -a -o mylib.cma a.cmo b.cmo
and then ask users to provide the -custom and -cclib -lmylib options themselves at link-time:
ocamlc -o myprog -custom mylib.cma ... -cclib -lmylib
The former alternative is more convenient for the final users of the library, however.
Starting with Objective Caml 3.03, an alternative to static linking of C code using the -custom flag is provided. In this mode, the OCaml linker generates a pure bytecode executable (no embedded custom runtime system) that simply records the names of dynamically-loaded libraries containing the C code. The standard OCaml runtime system ocamlrun then dynamically loads these libraries, and resolves references to the required primitives, before executing the bytecode.
This facility is currently available on all platforms supported by OCaml except Cygwin 64 bits.
To dynamically link C code with OCaml code, the C code must first be compiled into a shared library (under Unix) or DLL (under Windows). This involves 1- compiling the C files with appropriate C compiler flags for producing position-independent code (when required by the operating system), and 2- building a shared library from the resulting object files. The resulting shared library or DLL file must be installed in a place where ocamlrun can find it later at program start-up time (see section 15.3). Finally (step 3), execute the ocamlc command with
Do not set the -custom flag, otherwise you're back to static linking as described in section 22.1.3. The ocamlmklib tool (see section 22.14) automates steps 2 and 3.
As in the case of static linking, it is possible (and recommended) to record the names of C libraries in an OCaml .cma library archive. Consider again an OCaml library mylib.cma, built from the OCaml object files a.cmo and b.cmo, which reference C code in dllmylib.so. If the library is built as follows:
ocamlc -a -o mylib.cma a.cmo b.cmo -dllib -lmylib
users of the library can simply link with mylib.cma:
ocamlc -o myprog mylib.cma ...
and the system will automatically add the -dllib -lmylib option, achieving the same effect as
ocamlc -o myprog a.cmo b.cmo ... -dllib -lmylib
Using this mechanism, users of the library mylib.cma do not need to know that it references C code, nor whether this C code must be statically linked (using -custom) or dynamically linked.
After having described two different ways of linking C code with OCaml code, we now review the pros and cons of each, to help developers of mixed OCaml/C libraries decide.
The main advantage of dynamic linking is that it preserves the platform-independence of bytecode executables. That is, the bytecode executable contains no machine code, and can therefore be compiled on platform A and executed on other platforms B, C, and so on, as long as the required shared libraries are available on all these platforms. In contrast, executables generated by ocamlc -custom run only on the platform on which they were created, because they embed a custom-tailored runtime system specific to that platform. In addition, dynamic linking results in smaller executables.
Another advantage of dynamic linking is that the final users of the library do not need to have a C compiler, C linker, and C runtime libraries installed on their machines. This is no big deal under Unix and Cygwin, but many Windows users are reluctant to install Microsoft Visual C just to be able to do ocamlc -custom.
There are two drawbacks to dynamic linking. The first is that the resulting executable is not stand-alone: it requires the shared libraries, as well as ocamlrun, to be installed on the machine executing the code. If you wish to distribute a stand-alone executable, it is better to link it statically, using ocamlc -custom -ccopt -static or ocamlopt -ccopt -static. Dynamic linking also raises the "DLL hell" problem: some care must be taken to ensure that the right versions of the shared libraries are found at start-up time.
The second drawback of dynamic linking is that it complicates the construction of the library. The C compiler and linker flags to compile to position-independent code and build a shared library vary wildly between different Unix systems. Also, dynamic linking is not supported on all Unix systems, requiring a fall-back case to static linking in the Makefile for the library. The ocamlmklib command (see section 22.14) tries to hide some of these system dependencies.
In conclusion: dynamic linking is highly recommended under the native Windows port, because there are no portability problems and it is much more convenient for the end users. Under Unix, dynamic linking should be considered for mature, frequently used libraries because it enhances platform-independence of bytecode executables. For new or rarely-used libraries, static linking is much simpler to set up in a portable way.
It is sometimes inconvenient to build a custom runtime system each time OCaml code is linked with C libraries, like ocamlc -custom does. For one thing, the building of the runtime system is slow on some systems (that have bad linkers or slow remote file systems); for another thing, the platform-independence of bytecode files is lost, forcing one ocamlc -custom link to be performed per platform of interest.
An alternative to ocamlc -custom is to build separately a custom runtime system integrating the desired C libraries, then generate "pure" bytecode executables (not containing their own runtime system) that can run on this custom runtime. This is achieved by the -make-runtime and -use-runtime flags to ocamlc. For example, to build a custom runtime system integrating the C parts of the "Unix" and "Threads" libraries, do:
ocamlc -make-runtime -o /home/me/ocamlunixrun unix.cma threads.cma
To generate a bytecode executable that runs on this runtime system, do:
ocamlc -use-runtime /home/me/ocamlunixrun -o myprog \
unix.cma threads.cma your .cmo and .cma files
The bytecode executable myprog can then be launched as usual: myprog args or /home/me/ocamlunixrun myprog args.
Notice that the bytecode libraries unix.cma and threads.cma must be given twice: when building the runtime system (so that ocamlc knows which C primitives are required) and also when building the bytecode executable (so that the bytecode from unix.cma and threads.cma is actually linked in).
All OCaml objects are represented by the C type value, defined in the include file caml/mlvalues.h, along with macros to manipulate values of that type. An object of type value is either an unboxed integer or a pointer to a block inside the heap, allocated through one of the caml_alloc_* functions described in section 22.4.4.
Integer values encode 63-bit signed integers (31-bit on 32-bit architectures). They are unboxed (unallocated).
Blocks in the heap are garbage-collected, and therefore have strict structure constraints. Each block includes a header containing the size of the block (in words), and the tag of the block. The tag governs how the contents of the blocks are structured. A tag lower than No_scan_tag indicates a structured block, containing well-formed values, which is recursively traversed by the garbage collector. A tag greater than or equal to No_scan_tag indicates a raw block, whose contents are not scanned by the garbage collector. For the benefit of ad-hoc polymorphic primitives such as equality and structured input-output, structured and raw blocks are further classified according to their tags as follows:
Tag | Contents of the block |
0 to No_scan_tag - 1 | A structured block (an array of OCaml objects). Each field is a value. |
Closure_tag | A closure representing a functional value. The first word is a pointer to a piece of code, the remaining words are values containing the environment. |
String_tag | A character string or a byte sequence. |
Double_tag | A double-precision floating-point number. |
Double_array_tag | An array or record of double-precision floating-point numbers. |
Abstract_tag | A block representing an abstract datatype. |
Custom_tag | A block representing an abstract datatype with user-defined finalization, comparison, hashing, serialization and deserialization functions attached. |
In earlier versions of OCaml, it was possible to use word-aligned pointers to addresses outside the heap as OCaml values, just by casting the pointer to type value. This usage is no longer supported since OCaml 5.0.
A correct way to manipulate pointers to out-of-heap blocks from OCaml is to store those pointers in OCaml blocks with tag Abstract_tag or Custom_tag, then use the blocks as the OCaml values.
Here is an example of encapsulation of out-of-heap pointers of C type ty * inside Abstract_tag blocks. Section 22.6 gives a more complete example using Custom_tag blocks.
/* Create an OCaml value encapsulating the pointer p */
static value val_of_typtr(ty * p)
{
  value v = caml_alloc(1, Abstract_tag);
  *((ty **) Data_abstract_val(v)) = p;
  return v;
}

/* Extract the pointer encapsulated in the given OCaml value */
static ty * typtr_of_val(value v)
{
  return *((ty **) Data_abstract_val(v));
}
Alternatively, out-of-heap pointers can be treated as "native" integers, that is, boxed 32-bit integers on a 32-bit platform and boxed 64-bit integers on a 64-bit platform.
/* Create an OCaml value encapsulating the pointer p */
static value val_of_typtr(ty * p)
{
  return caml_copy_nativeint((intnat) p);
}

/* Extract the pointer encapsulated in the given OCaml value */
static ty * typtr_of_val(value v)
{
  return (ty *) Nativeint_val(v);
}
For pointers that are at least 2-aligned (the low bit is guaranteed to be zero), we have yet another valid representation as an OCaml tagged integer.
/* Create an OCaml value encapsulating the pointer p */
static value val_of_typtr(ty * p)
{
  assert (((uintptr_t) p & 1) == 0);  /* check correct alignment */
  return (value) p | 1;
}

/* Extract the pointer encapsulated in the given OCaml value */
static ty * typtr_of_val(value v)
{
  return (ty *) (v & ~1);
}
This section describes how OCaml data types are encoded in the value type.
OCaml type | Encoding |
int | Unboxed integer values. |
char | Unboxed integer values (ASCII code). |
float | Blocks with tag Double_tag. |
bytes | Blocks with tag String_tag. |
string | Blocks with tag String_tag. |
int32 | Blocks with tag Custom_tag. |
int64 | Blocks with tag Custom_tag. |
nativeint | Blocks with tag Custom_tag. |
Tuples are represented by pointers to blocks, with tag 0.
Records are also represented by zero-tagged blocks. The ordering of labels in the record type declaration determines the layout of the record fields: the value associated to the label declared first is stored in field 0 of the block, the value associated to the second label goes in field 1, and so on.
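For example, with the sketch below (player is a placeholder name), name is stored in field 0 of the block, score in field 1 and level in field 2; from C, these fields are read with Field(v, 0), Field(v, 1) and Field(v, 2), applying Int_val to the integer fields to obtain C integers:

type player = { name : string; score : int; level : int }
(* name -> field 0, score -> field 1, level -> field 2 *)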
As an optimization, records whose fields all have static type float are represented as arrays of floating-point numbers, with tag Double_array_tag. (See the section below on arrays.)
As another optimization, unboxable record types are represented specially; unboxable record types are the immutable record types that have only one field. An unboxable type will be represented in one of two ways: boxed or unboxed. Boxed record types are represented as described above (by a block with tag 0 or Double_array_tag). An unboxed record type is represented directly by the value of its field (i.e. there is no block to represent the record itself).
The representation is chosen according to the following, in decreasing order of priority:
Arrays of integers and pointers are represented like tuples, that is, as pointers to blocks taggedââ0. They are accessed with the Field macro for reading and the caml_modify function for writing.
Arrays of floating-point numbers (type float array) have a special, unboxed, more efficient representation. These arrays are represented by pointers to blocks with tag Double_array_tag. They should be accessed with the Double_field and Store_double_field macros.
Constructed terms are represented either by unboxed integers (for constant constructors) or by blocks whose tag encodes the constructor (for non-constant constructors). The constant constructors and the non-constant constructors for a given concrete type are numbered separately, starting from 0, in the order in which they appear in the concrete type declaration. A constant constructor is represented by the unboxed integer equal to its constructor number. A non-constant constructor declared with n arguments is represented by a block of size n, tagged with the constructor number; the n fields contain its arguments. Example:
Constructed term | Representation |
() | Val_int(0) |
false | Val_int(0) |
true | Val_int(1) |
[] | Val_int(0) |
h::t | Block with size = 2 and tag = 0; first field contains h, second field t. |
As a convenience, caml/mlvalues.h defines the macros Val_unit, Val_false and Val_true to refer to (), false and true.
The following example illustrates the assignment of integers and block tags to constructors:
type t =
  | A           (* First constant constructor -> integer "Val_int(0)" *)
  | B of string (* First non-constant constructor -> block with tag 0 *)
  | C           (* Second constant constructor -> integer "Val_int(1)" *)
  | D of bool   (* Second non-constant constructor -> block with tag 1 *)
  | E of t * t  (* Third non-constant constructor -> block with tag 2 *)
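As an illustration (not part of the manual's running example), the following sketch shows how C code can recover the constructor number of a value of the type t above; the function name constructor_number is hypothetical:

#include <caml/mlvalues.h>

/* Return the constructor number of a value of type t, numbering
   constant and non-constant constructors separately as described above. */
static int constructor_number(value v)
{
  if (Is_long(v))
    return Int_val(v);    /* 0 for A, 1 for C */
  else
    return Tag_val(v);    /* 0 for B, 1 for D, 2 for E */
}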
As an optimization, unboxable concrete data types are represented specially; a concrete data type is unboxable if it has exactly one constructor and this constructor has exactly one argument. Unboxable concrete data types are represented in the same way as unboxable record types: see the description in section 22.3.2.
Objects are represented as blocks with tag Object_tag. The first field of the block refers to the object's class and associated method suite, in a format that cannot easily be exploited from C. The second field contains a unique object ID, used for comparisons. The remaining fields of the object contain the values of the instance variables of the object. It is unsafe to access instance variables directly, as the type system provides no guarantee about the instance variables contained by an object.
One may extract a public method from an object using the C function caml_get_public_method (declared in <caml/mlvalues.h>). Since public method tags are hashed in the same way as variant tags, and methods are functions taking self as first argument, if you want to do the method call foo#bar from the C side, you should call:
callback(caml_get_public_method(foo, hash_variant("bar")), foo);
Like constructed terms, polymorphic variant values are represented either as integers (for polymorphic variants without argument), or as blocks (for polymorphic variants with an argument). Unlike constructed terms, variant constructors are not numbered starting from 0, but identified by a hash value (an OCaml integer), as computed by the C function hash_variant (declared in <caml/mlvalues.h>): the hash value for a variant constructor named, say, VConstr is hash_variant("VConstr").
The variant value `VConstr is represented by hash_variant("VConstr"). The variant value `VConstr(v) is represented by a block of size 2 and tag 0, with field number 0 containing hash_variant("VConstr") and field number 1 containing v.
Unlike constructed values, polymorphic variant values taking several arguments are not flattened. That is, `VConstr(v, w) is represented by a block of size 2, whose field number 1 contains the representation of the pair (v, w), rather than a block of size 3 containing v and w in fields 1 and 2.
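For instance, here is a sketch of building the values `VConstr and `VConstr(v) from C. The text above refers to the hashing function as hash_variant; recent OCaml headers declare it under the prefixed name caml_hash_variant, which is assumed below — adjust the name to the headers of your OCaml version:

#include <caml/mlvalues.h>
#include <caml/alloc.h>
#include <caml/memory.h>

static value make_vconstr_constant(void)
{
  return caml_hash_variant("VConstr");      /* the value `VConstr */
}

static value make_vconstr(value v)
{
  CAMLparam1(v);
  CAMLlocal1(res);
  res = caml_alloc(2, 0);                   /* block of size 2, tag 0 */
  Store_field(res, 0, caml_hash_variant("VConstr"));
  Store_field(res, 1, v);
  CAMLreturn(res);                          /* the value `VConstr(v) */
}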
The expressions Field(v, n), Byte(v, n) and Byte_u(v, n) are valid l-values. Hence, they can be assigned to, resulting in an in-place modification of value v. Assigning directly to Field(v, n) must be done with care to avoid confusing the garbage collector (see below).
The following functions are slightly more efficient than caml_alloc, but also much more difficult to use.
From the standpoint of the allocation functions, blocks are divided according to their size as zero-sized blocks, small blocks (with size less than or equal to Max_young_wosize), and large blocks (with size greater than Max_young_wosize). The constant Max_young_wosize is declared in the include file mlvalues.h. It is guaranteed to be at least 64 (words), so that any block with constant size less than or equal to 64 can be assumed to be small. For blocks whose size is computed at run-time, the size must be compared against Max_young_wosize to determine the correct allocation procedure.
With caml_alloc_shr, the size of the allocated block can be greater than Max_young_wosize. (It can also be smaller, but in this case it is more efficient to call caml_alloc_small instead of caml_alloc_shr.)
If this block is a structured block (i.e. if t < No_scan_tag), then the fields of the block (initially containing garbage) must be initialized with legal values (using the caml_initialize function described below) before the next allocation.
Two functions are provided to raise two standard exceptions:
caml_failwith(s), where s is a null-terminated C string (with type char *), raises exception Failure with argument s.
caml_invalid_argument(s), where s is a null-terminated C string (with type char *), raises exception Invalid_argument with argument s.
Raising arbitrary exceptions from C is more delicate: the exception identifier is dynamically allocated by the OCaml program, and therefore must be communicated to the C function using the registration facility described below in section 22.7.3. Once the exception identifier is recovered in C, the following functions actually raise the exception:
Unused blocks in the heap are automatically reclaimed by the garbage collector. This requires some cooperation from C code that manipulates heap-allocated blocks.
All the macros described in this section are declared in the memory.h header file.
There are six CAMLparam macros: CAMLparam0 to CAMLparam5, which take zero to five arguments respectively. If your function has no more than 5 parameters of type value, use the corresponding macros with these parameters as arguments. If your function has more than 5 parameters of type value, use CAMLparam5 with five of these parameters, and use one or more calls to the CAMLxparam macros for the remaining parameters (CAMLxparam1 to CAMLxparam5).
The macros CAMLreturn, CAMLreturn0, and CAMLreturnT are used to replace the C keyword return. Every occurrence of return x must be replaced by CAMLreturn (x) if x has type value, or CAMLreturnT (t, x) (where t is the type of x); every occurrence of return without argument must be replaced by CAMLreturn0. If your C function is a procedure (i.e. if it returns void), you must insert CAMLreturn0 at the end (to replace C's implicit return).
Note: some C compilers give bogus warnings about unused variables caml__dummy_xxx at each use of CAMLparam and CAMLlocal. You should ignore them.
Example:
void foo (value v1, value v2, value v3)
{
  CAMLparam3 (v1, v2, v3);
  ...
  CAMLreturn0;
}
Note: if your function is a primitive with more than 5 arguments for use with the byte-code runtime, its arguments are not values and must not be declared (they have types value * and int).
The macros CAMLlocal1 to CAMLlocal5 declare and initialize one to five local variables of type value. The variable names are given as arguments to the macros. CAMLlocalN(x, n) declares and initializes a local variable of type value [n]. You can use several calls to these macros if you have more than 5 local variables.
Example:
CAMLprim value bar (value v1, value v2, value v3)
{
  CAMLparam3 (v1, v2, v3);
  CAMLlocal1 (result);
  result = caml_alloc (3, 0);
  ...
  CAMLreturn (result);
}
Store_field (b, n, v) stores the value v in the field number n of value b, which must be a block (i.e. Is_block(b) must be true).
Example:
CAMLprim value bar (value v1, value v2, value v3)
{
  CAMLparam3 (v1, v2, v3);
  CAMLlocal1 (result);
  result = caml_alloc (3, 0);
  Store_field (result, 0, v1);
  Store_field (result, 1, v2);
  Store_field (result, 2, v3);
  CAMLreturn (result);
}
The first argument of Store_field and Store_double_field must be a variable declared by CAMLparam* or a parameter declared by CAMLlocal* to ensure that a garbage collection triggered by the evaluation of the other arguments will not invalidate the first argument after it is computed.
Arrays of values declared using CAMLlocalN must not be written to using Store_field. Use the normal C array syntax instead.
The same is true for any memory location outside the OCaml heap that contains a value and is not guaranteed to be reachable (for as long as it contains such a value) from either another registered global variable or location, a local variable declared with CAMLlocal, or a function parameter declared with CAMLparam.
Registration of a global variable v is achieved by calling caml_register_global_root(&v) just before or just after a valid value is stored in v for the first time; likewise, registration of an arbitrary location p is achieved by calling caml_register_global_root(p).
You must not call any of the OCaml runtime functions or macros between registering and storing the value. Neither must you store anything in the variable v (likewise, the location p) that is not a valid value.
The registration causes the contents of the variable or memory location to be updated by the garbage collector whenever the value in such variable or location is moved within the OCaml heap. In the presence of threads care must be taken to ensure appropriate synchronisation with the OCaml runtime to avoid a race condition against the garbage collector when reading or writing the value. (See section 22.12.2.)
A registered global variable v can be un-registered by calling caml_remove_global_root(&v).
If the contents of the global variable v are seldom modified after registration, better performance can be achieved by calling caml_register_generational_global_root(&v) to register v (after its initialization with a valid value, but before any allocation or call to the GC functions), and caml_remove_generational_global_root(&v) to un-register it. In this case, you must not modify the value of v directly, but you must use caml_modify_generational_global_root(&v,x) to set it to x. The garbage collector takes advantage of the guarantee that v is not modified between calls to caml_modify_generational_global_root to scan it less often. This improves performance if the modifications of v happen less often than minor collections.
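For instance, here is a sketch of keeping an OCaml value alive across calls from C using a generational global root. The variable and function names (saved_callback, save_callback) are illustrative only:

#include <caml/mlvalues.h>
#include <caml/memory.h>

static value saved_callback = Val_unit;     /* a valid value from the start */
static int saved_callback_registered = 0;

void save_callback(value f)
{
  if (!saved_callback_registered) {
    saved_callback = f;                     /* store a valid value first */
    caml_register_generational_global_root(&saved_callback);
    saved_callback_registered = 1;
  } else {
    /* never assign saved_callback directly after registration */
    caml_modify_generational_global_root(&saved_callback, f);
  }
}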
The CAML macros use identifiers (local variables, type identifiers, structure tags) that start with caml__. Do not use any identifier starting with caml__ in your programs.
We now give the GC rules corresponding to the low-level allocation functions caml_alloc_small and caml_alloc_shr. You can ignore those rules if you stick to the simplified allocation function caml_alloc.
If the block has been allocated with caml_alloc_small, filling is performed by direct assignment to the fields:

Field(v, n) = vn;

If the block has been allocated with caml_alloc_shr, filling is performed through the caml_initialize function:
caml_initialize(&Field(v, n), vn);
The next allocation can trigger a garbage collection. The garbage collector assumes that all structured blocks contain well-formed values. Newly created blocks contain random data, which generally do not represent well-formed values.
If you really need to allocate before the fields can receive their final value, first initialize with a constant value (e.g. Val_unit), then allocate, then modify the fields with the correct value (see rule 6).
A direct assignment to a field of a block, as in

Field(v, n) = w;

is safe only if v is a block newly allocated by caml_alloc_small; that is, if no allocation took place between the allocation of v and the assignment to the field. In all other cases, never assign directly. If the block has just been allocated by caml_alloc_shr, use caml_initialize to assign a value to a field for the first time:
caml_initialize(&Field(v, n), w);

Otherwise, you are updating a field that previously contained a well-formed value; then, call the caml_modify function:
caml_modify(&Field(v, n), w);
To illustrate the rules above, here is a C function that builds and returns a list containing the two integers given as parameters. First, we write it using the simplified allocation functions:
value alloc_list_int(int i1, int i2)
{
  CAMLparam0 ();
  CAMLlocal2 (result, r);
  r = caml_alloc(2, 0);                  /* Allocate a cons cell */
  Store_field(r, 0, Val_int(i2));        /* car = the integer i2 */
  Store_field(r, 1, Val_int(0));         /* cdr = the empty list [] */
  result = caml_alloc(2, 0);             /* Allocate the other cons cell */
  Store_field(result, 0, Val_int(i1));   /* car = the integer i1 */
  Store_field(result, 1, r);             /* cdr = the first cons cell */
  CAMLreturn (result);
}
Here, the registering of result is not strictly needed, because no allocation takes place after it gets its value, but it's easier and safer to simply register all the local variables that have type value.
Here is the same function written using the low-level allocation functions. We notice that the cons cells are small blocks and can be allocated with caml_alloc_small, and filled by direct assignments on their fields.
value alloc_list_int(int i1, int i2)
{
  CAMLparam0 ();
  CAMLlocal2 (result, r);
  r = caml_alloc_small(2, 0);            /* Allocate a cons cell */
  Field(r, 0) = Val_int(i2);             /* car = the integer i2 */
  Field(r, 1) = Val_int(0);              /* cdr = the empty list [] */
  result = caml_alloc_small(2, 0);       /* Allocate the other cons cell */
  Field(result, 0) = Val_int(i1);        /* car = the integer i1 */
  Field(result, 1) = r;                  /* cdr = the first cons cell */
  CAMLreturn (result);
}
In the two examples above, the list is built bottom-up. Here is an alternate way, that proceeds top-down. It is less efficient, but illustrates the use of caml_modify.
value alloc_list_int(int i1, int i2)
{
  CAMLparam0 ();
  CAMLlocal2 (tail, r);
  r = caml_alloc_small(2, 0);            /* Allocate a cons cell */
  Field(r, 0) = Val_int(i1);             /* car = the integer i1 */
  Field(r, 1) = Val_int(0);              /* cdr = a dummy value */
  tail = caml_alloc_small(2, 0);         /* Allocate the other cons cell */
  Field(tail, 0) = Val_int(i2);          /* car = the integer i2 */
  Field(tail, 1) = Val_int(0);           /* cdr = the empty list [] */
  caml_modify(&Field(r, 1), tail);       /* cdr of the result = tail */
  CAMLreturn (r);
}
It would be incorrect to perform Field(r, 1) = tail directly, because the allocation of tail has taken place since r was allocated.
Since 4.10, allocation functions are guaranteed not to call any OCaml callbacks from C, including finalisers and signal handlers, and delay their execution instead.
The function caml_process_pending_actions from <caml/signals.h> executes any pending signal handlers and finalisers, Memprof callbacks, and requested minor and major garbage collections. In particular, it can raise asynchronous exceptions. It is recommended to call it regularly at safe points inside long-running non-blocking C code.
The variant caml_process_pending_actions_exn is provided, which returns the exception instead of raising it directly into OCaml code. Its result must be tested using Is_exception_result, and followed by Extract_exception if appropriate. It is typically used for clean-up before re-raising:
CAMLlocal1(exn);
...
exn = caml_process_pending_actions_exn();
if (Is_exception_result(exn)) {
  exn = Extract_exception(exn);
  ...cleanup...
  caml_raise(exn);
}
Correct use of exceptional return, in particular in the presence of garbage collection, is further detailed in Section 22.7.1.
This section outlines how the functions from the Unix curses library can be made available to OCaml programs. First of all, here is the interface curses.ml that declares the curses primitives and data types:
(* File curses.ml -- declaration of primitives and data types *)

type window (* The type "window" remains abstract *)

external initscr: unit -> window = "caml_curses_initscr"
external endwin: unit -> unit = "caml_curses_endwin"
external refresh: unit -> unit = "caml_curses_refresh"
external wrefresh : window -> unit = "caml_curses_wrefresh"
external newwin: int -> int -> int -> int -> window = "caml_curses_newwin"
external addch: char -> unit = "caml_curses_addch"
external mvwaddch: window -> int -> int -> char -> unit = "caml_curses_mvwaddch"
external addstr: string -> unit = "caml_curses_addstr"
external mvwaddstr: window -> int -> int -> string -> unit = "caml_curses_mvwaddstr"
(* lots more omitted *)
To compile this interface:
ocamlc -c curses.ml
To implement these functions, we just have to provide the stub code; the core functions are already implemented in the curses library. The stub code file, curses_stubs.c, looks like this:
/* File curses_stubs.c -- stub code for curses */

#include <curses.h>
#include <caml/mlvalues.h>
#include <caml/memory.h>
#include <caml/alloc.h>
#include <caml/custom.h>

/* Encapsulation of opaque window handles (of type WINDOW *)
   as OCaml custom blocks. */

static struct custom_operations curses_window_ops = {
  "fr.inria.caml.curses_windows",
  custom_finalize_default,
  custom_compare_default,
  custom_hash_default,
  custom_serialize_default,
  custom_deserialize_default,
  custom_compare_ext_default,
  custom_fixed_length_default
};

/* Accessing the WINDOW * part of an OCaml custom block */
#define Window_val(v) (*((WINDOW **) Data_custom_val(v)))

/* Allocating an OCaml custom block to hold the given WINDOW * */
static value alloc_window(WINDOW * w)
{
  value v = caml_alloc_custom(&curses_window_ops, sizeof(WINDOW *), 0, 1);
  Window_val(v) = w;
  return v;
}

CAMLprim value caml_curses_initscr(value unit)
{
  CAMLparam1 (unit);
  CAMLreturn (alloc_window(initscr()));
}

CAMLprim value caml_curses_endwin(value unit)
{
  CAMLparam1 (unit);
  endwin();
  CAMLreturn (Val_unit);
}

CAMLprim value caml_curses_refresh(value unit)
{
  CAMLparam1 (unit);
  refresh();
  CAMLreturn (Val_unit);
}

CAMLprim value caml_curses_wrefresh(value win)
{
  CAMLparam1 (win);
  wrefresh(Window_val(win));
  CAMLreturn (Val_unit);
}

CAMLprim value caml_curses_newwin(value nlines, value ncols, value x0, value y0)
{
  CAMLparam4 (nlines, ncols, x0, y0);
  CAMLreturn (alloc_window(newwin(Int_val(nlines), Int_val(ncols),
                                  Int_val(x0), Int_val(y0))));
}

CAMLprim value caml_curses_addch(value c)
{
  CAMLparam1 (c);
  addch(Int_val(c));            /* Characters are encoded like integers */
  CAMLreturn (Val_unit);
}

CAMLprim value caml_curses_mvwaddch(value win, value x, value y, value c)
{
  CAMLparam4 (win, x, y, c);
  mvwaddch(Window_val(win), Int_val(x), Int_val(y), Int_val(c));
  CAMLreturn (Val_unit);
}

CAMLprim value caml_curses_addstr(value s)
{
  CAMLparam1 (s);
  addstr(String_val(s));
  CAMLreturn (Val_unit);
}

CAMLprim value caml_curses_mvwaddstr(value win, value x, value y, value s)
{
  CAMLparam4 (win, x, y, s);
  mvwaddstr(Window_val(win), Int_val(x), Int_val(y), String_val(s));
  CAMLreturn (Val_unit);
}

/* This goes on for pages. */
The file curses_stubs.c can be compiled with:
cc -c -I`ocamlc -where` curses_stubs.c
or, even simpler,
ocamlc -c curses_stubs.c
(When passed a .c file, the ocamlc command simply calls the C compiler on that file, with the right -I option.)
Now, here is a sample OCaml program prog.ml that uses the curses module:
(* File prog.ml -- main program using curses *)

open Curses;;

let main_window = initscr () in
let small_window = newwin 10 5 20 10 in
mvwaddstr main_window 10 2 "Hello";
mvwaddstr small_window 4 3 "world";
refresh();
Unix.sleep 5;
endwin()
To compile and link this program, run:
ocamlc -custom -o prog unix.cma curses.cmo prog.ml curses_stubs.o -cclib -lcurses
(On some machines, you may need to put -cclib -lcurses -cclib -ltermcap or -cclib -ltermcap instead of -cclib -lcurses.)
So far, we have described how to call C functions from OCaml. In this section, we show how C functions can call OCaml functions, either as callbacks (OCaml calls C which calls OCaml), or with the main program written in C.
C functions can apply OCaml function values (closures) to OCaml values. The following functions are provided to perform the applications:
If the function f does not return, but raises an exception that escapes the scope of the application, then this exception is propagated to the next enclosing OCaml code, skipping over the C code. That is, if an OCaml function f calls a C function g that calls back an OCaml function h that raises a stray exception, then the execution of g is interrupted and the exception is propagated back into f.
If the C code wishes to catch exceptions escaping the OCaml function, it can use the functions caml_callback_exn, caml_callback2_exn, caml_callback3_exn, caml_callbackN_exn. These functions take the same arguments as their non-_exn counterparts, but catch escaping exceptions and return them to the C code. The return value v of the caml_callback*_exn functions must be tested with the macro Is_exception_result(v). If the macro returns "false", no exception occurred, and v is the value returned by the OCaml function. If Is_exception_result(v) returns "true", an exception escaped, and its value (the exception descriptor) can be recovered using Extract_exception(v).
If the OCaml function returned with an exception, Extract_exception should be applied to the exception result prior to calling a function that may trigger garbage collection. Otherwise, if v is reachable during garbage collection, the runtime can crash since v does not contain a valid value.
Example:
CAMLprim value call_caml_f_ex(value closure, value arg)
{
  CAMLparam2(closure, arg);
  CAMLlocal2(res, tmp);

  res = caml_callback_exn(closure, arg);
  if (Is_exception_result(res)) {
    res = Extract_exception(res);
    tmp = caml_alloc(3, 0);   /* Safe to allocate: res contains valid value. */
    ...
  }
  CAMLreturn (res);
}
There are two ways to obtain OCaml function values (closures) to be passed to the callback functions described above. One way is to pass the OCaml function as an argument to a primitive function. For example, if the OCaml code contains the declaration
external apply : ('a -> 'b) -> 'a -> 'b = "caml_apply"
the corresponding C stub can be written as follows:
CAMLprim value caml_apply(value vf, value vx)
{
  CAMLparam2(vf, vx);
  CAMLlocal1(vy);
  vy = caml_callback(vf, vx);
  CAMLreturn(vy);
}
Another possibility is to use the registration mechanism provided by OCaml. This registration mechanism enables OCaml code to register OCaml functions under some global name, and C code to retrieve the corresponding closure by this global name.
On the OCaml side, registration is performed by evaluating Callback.register n v. Here, n is the global name (an arbitrary string) and v the OCaml value. For instance:
let f x = print_string "f is applied to "; print_int x; print_newline()
let _ = Callback.register "test function" f
On the C side, a pointer to the value registered under name n is obtained by calling caml_named_value(n). The returned pointer must then be dereferenced to recover the actual OCaml value. If no value is registered under the name n, the null pointer is returned. For example, here is a C wrapper that calls the OCaml function f above:
void call_caml_f(int arg)
{
  caml_callback(*caml_named_value("test function"), Val_int(arg));
}
The pointer returned by caml_named_value is constant and can safely be cached in a C variable to avoid repeated name lookups. The value pointed to cannot be changed from C. However, it might change during garbage collection, so it must always be recomputed at the point of use. Here is a more efficient variant of call_caml_f above that calls caml_named_value only once:
void call_caml_f(int arg)
{
  static const value * closure_f = NULL;
  if (closure_f == NULL) {
    /* First time around, look up by name */
    closure_f = caml_named_value("test function");
  }
  caml_callback(*closure_f, Val_int(arg));
}
The registration mechanism described above can also be used to communicate exception identifiers from OCaml to C. The OCaml code registers the exception by evaluating Callback.register_exception n exn, where n is an arbitrary name and exn is an exception value of the exception to register. For example:
exception Error of string
let _ = Callback.register_exception "test exception" (Error "any string")
The C code can then recover the exception identifier using caml_named_value and pass it as first argument to the functions caml_raise_constant, caml_raise_with_arg, and caml_raise_with_string (described in section 22.4.5) to actually raise the exception. For example, here is a C function that raises the Error exception with the given argument:
void raise_error(char * msg)
{
  caml_raise_with_string(*caml_named_value("test exception"), msg);
}
In normal operation, a mixed OCaml/C program starts by executing the OCaml initialization code, which then may proceed to call C functions. We say that the main program is the OCaml code. In some applications, it is desirable that the C code plays the role of the main program, calling OCaml functions when needed. This can be achieved as follows:
The bytecode compiler in custom runtime mode (ocamlc -custom) normally appends the bytecode to the executable file containing the custom runtime. This has two consequences. First, the final linking step must be performed by ocamlc. Second, the OCaml runtime library must be able to find the name of the executable file from the command-line arguments. When using caml_main(argv) as in section 22.7.4, this means that argv[0] or argv[1] must contain the executable file name.
An alternative is to embed the bytecode in the C code. The -output-obj and -output-complete-obj options to ocamlc are provided for this purpose. They cause the ocamlc compiler to output a C object file (.o file, .obj under Windows) containing the bytecode for the OCaml part of the program, as well as a caml_startup function. The C object file produced by ocamlc -output-complete-obj also contains the runtime and autolink libraries. The C object file produced by ocamlc -output-obj or ocamlc -output-complete-obj can then be linked with C code using the standard C compiler, or stored in a C library.
The caml_startup function must be called from the main C program in order to initialize the OCaml runtime and execute the OCaml initialization code. Just like caml_main, it takes one argv parameter containing the command-line parameters. Unlike caml_main, this argv parameter is used only to initialize Sys.argv, but not for finding the name of the executable file.
The caml_startup function calls the uncaught exception handler (or enters the debugger, if running under ocamldebug) if an exception escapes from a top-level module initialiser. Such exceptions may be caught in the C code by instead using the caml_startup_exn function and testing the result using Is_exception_result (followed by Extract_exception if appropriate).
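For instance, here is a minimal sketch of a C main program using caml_startup_exn, assuming the Unix case where argv has type char ** (on Windows the runtime expects wide-character arguments):

#include <stdio.h>
#include <caml/mlvalues.h>
#include <caml/callback.h>

int main(int argc, char ** argv)
{
  value res = caml_startup_exn(argv);
  if (Is_exception_result(res)) {
    value exn = Extract_exception(res);
    (void) exn;   /* inspect or report the exception descriptor here */
    fprintf(stderr, "OCaml initialisation failed\n");
    return 2;
  }
  /* ... call OCaml entry points via the callback mechanism ... */
  return 0;
}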
The -output-obj and -output-complete-obj options can also be used to obtain the C source file. More interestingly, these options can also produce directly a shared library (.so file, .dll under Windows) that contains the OCaml code, the OCaml runtime system and any other static C code given to ocamlc (.o, .a, respectively, .obj, .lib). This use of -output-obj and -output-complete-obj is very similar to a normal linking step, but instead of producing a main program that automatically runs the OCaml code, it produces a shared library that can run the OCaml code on demand. The three possible behaviors of -output-obj and -output-complete-obj (to produce a C source file .c, a C object file .o, or a shared library .so) are selected according to the extension of the resulting file (given with -o).
The native-code compiler ocamlopt also supports the -output-obj and -output-complete-obj options, causing it to output a C object file or a shared library containing the native code for all OCaml modules on the command-line, as well as the OCaml startup code. Initialization is performed by calling caml_startup (or caml_startup_exn) as in the case of the bytecode compiler. The file produced by ocamlopt -output-complete-obj also contains the runtime and autolink libraries.
For the final linking phase, in addition to the object file produced by -output-obj, you will have to provide the OCaml runtime library (libcamlrun.a for bytecode, libasmrun.a for native-code), as well as all C libraries that are required by the OCaml libraries used. For instance, assume the OCaml part of your program uses the Unix library. With ocamlc, you should do:
ocamlc -output-obj -o camlcode.o unix.cma other .cmo and .cma files
cc -o myprog C objects and libraries \
   camlcode.o -L`ocamlc -where` -lunix -lcamlrun
With ocamlopt, you should do:
ocamlopt -output-obj -o camlcode.o unix.cmxa other .cmx and .cmxa files
cc -o myprog C objects and libraries \
   camlcode.o -L`ocamlc -where` -lunix -lasmrun
For the final linking phase, in addition to the object file produced by -output-complete-obj, you will have only to provide the C libraries required by the OCaml runtime.
For instance, assume the OCaml part of your program uses the Unix library. With ocamlc, you should do:
ocamlc -output-complete-obj -o camlcode.o unix.cma other .cmo and .cma files
cc -o myprog C objects and libraries \
   camlcode.o C libraries required by the runtime, eg -lm -ldl -lcurses -lpthread
With ocamlopt, you should do:
ocamlopt -output-complete-obj -o camlcode.o unix.cmxa other .cmx and .cmxa files
cc -o myprog C objects and libraries \
   camlcode.o C libraries required by the runtime, eg -lm -ldl
On some ports, special options are required on the final linking phase that links together the object file produced by the -output-obj and -output-complete-obj options and the remainder of the program. Those options are shown in the configuration file Makefile.config generated during compilation of OCaml, as the variable OC_LDFLAGS.
When OCaml bytecode produced by ocamlc -g is embedded in a C program, no debugging information is included, and therefore it is impossible to print stack backtraces on uncaught exceptions. This is not the case when native code produced by ocamlopt -g is embedded in a C program: stack backtrace information is available, but the backtrace mechanism needs to be turned on programmatically. This can be achieved from the OCaml side by calling Printexc.record_backtrace true in the initialization of one of the OCaml modules. This can also be achieved from the C side by calling caml_record_backtraces(1); in the OCaml-C glue code. (caml_record_backtraces is declared in backtrace.h)
In case the shared library produced with -output-obj is to be loaded and unloaded repeatedly by a single process, care must be taken to unload the OCaml runtime explicitly, in order to avoid various system resource leaks.
Since 4.05, the caml_shutdown function can be used to shut the runtime down gracefully.
As a shared library may have several clients simultaneously, caml_startup (and caml_startup_pooled) may, for convenience, be called multiple times, provided that each such call is paired with a corresponding call to caml_shutdown (in a nested fashion). The runtime will be unloaded once there are no outstanding calls to caml_startup.
Once a runtime is unloaded, it cannot be started up again without reloading the shared library and reinitializing its static data. Therefore, at the moment, the facility is only useful for building reloadable shared libraries.
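For instance, a reloadable shared library could pair the two calls in its own init and deinit entry points, as in the following sketch (the function names mylib_init and mylib_deinit and the program name "mylib" are placeholders):

#include <caml/callback.h>

static char * argv[] = { "mylib", NULL };   /* placeholder program name */

void mylib_init(void)
{
  caml_startup(argv);    /* nested calls are allowed, as described above */
}

void mylib_deinit(void)
{
  caml_shutdown();       /* must match a previous call to caml_startup */
}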
Depending on the target platform and operating system, the native-code runtime system may install signal handlers for one or several of the SIGSEGV, SIGTRAP and SIGFPE signals when caml_startup is called, and reset these signals to their default behaviors when caml_shutdown is called. The main program written in C should not try to handle these signals itself.
This section illustrates the callback facilities described in section 22.7. We are going to package some OCaml functions in such a way that they can be linked with C code and called from C just like any C functions. The OCaml functions are defined in the following mod.ml OCaml source:
(* File mod.ml -- some "useful" OCaml functions *)

let rec fib n = if n < 2 then 1 else fib(n-1) + fib(n-2)

let format_result n = Printf.sprintf "Result is: %d\n" n

(* Export those two functions to C *)

let _ = Callback.register "fib" fib
let _ = Callback.register "format_result" format_result
Here is the C stub code for calling these functions from C:
/* File modwrap.c -- wrappers around the OCaml functions */

#include <stdio.h>
#include <string.h>
#include <caml/mlvalues.h>
#include <caml/callback.h>

int fib(int n)
{
  static const value * fib_closure = NULL;
  if (fib_closure == NULL) fib_closure = caml_named_value("fib");
  return Int_val(caml_callback(*fib_closure, Val_int(n)));
}

char * format_result(int n)
{
  static const value * format_result_closure = NULL;
  if (format_result_closure == NULL)
    format_result_closure = caml_named_value("format_result");
  return strdup(String_val(caml_callback(*format_result_closure, Val_int(n))));
  /* We copy the C string returned by String_val to the C heap
     so that it remains valid after garbage collection. */
}
We now compile the OCaml code to a C object file and put it in a C library along with the stub code in modwrap.c and the OCaml runtime system:
ocamlc -custom -output-obj -o modcaml.o mod.ml
ocamlc -c modwrap.c
cp `ocamlc -where`/libcamlrun.a mod.a && chmod +w mod.a
ar r mod.a modcaml.o modwrap.o
(One can also use ocamlopt -output-obj instead of ocamlc -custom -output-obj. In this case, replace libcamlrun.a (the bytecode runtime library) by libasmrun.a (the native-code runtime library).)
Now, we can use the two functions fib and format_result in any C program, just like regular C functions. Just remember to call caml_startup (or caml_startup_exn) once before.
/* File main.c -- a sample client for the OCaml functions */

#include <stdio.h>
#include <caml/callback.h>

extern int fib(int n);
extern char * format_result(int n);

int main(int argc, char ** argv)
{
  int result;

  /* Initialize OCaml code */
  caml_startup(argv);
  /* Do some computation */
  result = fib(10);
  printf("fib(10) = %s\n", format_result(result));
  return 0;
}
To build the whole program, just invoke the C compiler as follows:
cc -o prog -I `ocamlc -where` main.c mod.a -lcurses
(On some machines, you may need to put -ltermcap or -lcurses -ltermcap instead of -lcurses.)
Blocks with tag Custom_tag contain both arbitrary user data and a pointer to a C struct, with type struct custom_operations, that associates user-provided finalization, comparison, hashing, serialization and deserialization functions to this block.
The struct custom_operations is defined in <caml/custom.h> and contains the following fields:
The compare field can be set to custom_compare_default; this default comparison function simply raises Failure.
The compare_ext field can be set to custom_compare_ext_default; this default comparison function simply raises Failure.
The hash field can be set to custom_hash_default, in which case the custom block is ignored during hash computation.
The serialize field can be set to custom_serialize_default, in which case the Failure exception is raised when attempting to serialize the custom block.
The deserialize field can be set to custom_deserialize_default to indicate that deserialization is not supported. In this case, do not register the struct custom_operations with the deserializer using register_custom_operations (see below).
Note: the finalize, compare, hash, serialize and deserialize functions attached to custom block descriptors must never trigger a garbage collection. Within these functions, do not call any of the OCaml allocation functions, and do not perform a callback into OCaml code. Do not use CAMLparam to register the parameters to these functions, and do not use CAMLreturn to return the result.
Custom blocks must be allocated via caml_alloc_custom or caml_alloc_custom_mem:
caml_alloc_custom(ops, size, used, max) returns a fresh custom block, with room for size bytes of user data, and whose associated operations are given by ops (a pointer to a struct custom_operations, usually statically allocated as a C global variable).
The two parameters used and max are used to control the speed of garbage collection when the finalized object contains pointers to out-of-heap resources. Generally speaking, the OCaml incremental major collector adjusts its speed relative to the allocation rate of the program. The faster the program allocates, the harder the GC works in order to quickly reclaim unreachable blocks and avoid having a large amount of "floating garbage" (unreferenced objects that the GC has not yet collected).
Normally, the allocation rate is measured by counting the in-heap size of allocated blocks. However, it often happens that finalized objects contain pointers to out-of-heap memory blocks and other resources (such as file descriptors, X Windows bitmaps, etc.). For those blocks, the in-heap size of blocks is not a good measure of the quantity of resources allocated by the program.
The two arguments used and max give the GC an idea of how much out-of-heap resources are consumed by the finalized block being allocated: you give the amount of resources allocated to this object as parameter used, and the maximum amount that you want to see in floating garbage as parameter max. The units are arbitrary: the GC cares only about the ratio used / max.
For instance, if you are allocating a finalized block holding an X Windows bitmap of w by h pixels, and you'd rather not have more than 1 megapixel of unreclaimed bitmaps, specify used = w * h and max = 1000000.
Another way to describe the effect of the used and max parameters is in terms of full GC cycles. If you allocate many custom blocks with used / max = 1 / N, the GC will then do one full cycle (examining every object in the heap and calling finalization functions on those that are unreachable) every N allocations. For instance, if used = 1 and max = 1000, the GC will do one full cycle at least every 1000 allocations of custom blocks.
If your finalized blocks contain no pointers to out-of-heap resources, or if the previous discussion made little sense to you, just take used = 0 and max = 1. But if you later find that the finalization functions are not called "often enough", consider increasing the used / max ratio.
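As an illustration of the bitmap scenario above, here is a sketch of the allocation call. The bitmap_ops operations structure (with an appropriate finalizer) and the function name alloc_bitmap are assumed to be defined elsewhere:

#include <caml/mlvalues.h>
#include <caml/custom.h>

extern struct custom_operations bitmap_ops;   /* assumed, with a finalizer */

value alloc_bitmap(void * pixels, long w, long h)
{
  value v = caml_alloc_custom(&bitmap_ops, sizeof(void *),
                              w * h,       /* used: pixels held out of heap */
                              1000000);    /* max: tolerate about 1 Mpixel  */
  *((void **) Data_custom_val(v)) = pixels;
  return v;
}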
caml_alloc_custom_mem(ops, size, used): use this function when your custom block holds only out-of-heap memory (memory allocated with malloc or caml_stat_alloc) and no other resources. used should be the number of bytes of out-of-heap memory that are held by your custom block. This function works like caml_alloc_custom, except that the max parameter is under the control of the user (via the custom_major_ratio, custom_minor_ratio, and custom_minor_max_size parameters) and proportional to the heap sizes. It has been available since OCaml 4.08.0.
The data part of a custom block v can be accessed via the pointer Data_custom_val(v). This pointer has type void * and should be cast to the actual type of the data stored in the custom block.
The contents of custom blocks are not scanned by the garbage collector, and must therefore not contain any pointer inside the OCaml heap. In other terms, never store an OCaml value in a custom block, and do not use Field, Store_field nor caml_modify to access the data part of a custom block. Conversely, any C data structure (not containing heap pointers) can be stored in a custom block.
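For instance, here is a sketch of a custom block that owns a malloc'd C buffer. The buffer pointer is plain C data, so it may live inside the custom block; the operations structure, the Buffer_val macro and the identifier string are illustrative only:

#include <stdlib.h>
#include <caml/mlvalues.h>
#include <caml/custom.h>

#define Buffer_val(v) (*((char **) Data_custom_val(v)))

static void buffer_finalize(value v)
{
  free(Buffer_val(v));          /* must not trigger a garbage collection */
}

static struct custom_operations buffer_ops = {
  "org.example.buffer",         /* identifier; see the naming advice below */
  buffer_finalize,
  custom_compare_default,
  custom_hash_default,
  custom_serialize_default,
  custom_deserialize_default,
  custom_compare_ext_default,
  custom_fixed_length_default
};

value alloc_buffer(size_t n)
{
  value v = caml_alloc_custom_mem(&buffer_ops, sizeof(char *), n);
  Buffer_val(v) = malloc(n);    /* no allocation between the two steps */
  return v;
}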
The following functions, defined in <caml/intext.h>, are provided to write and read back the contents of custom blocks in a portable way. Those functions handle endianness conversions when e.g. data is written on a little-endian machine and read back on a big-endian machine.
Function | Action |
caml_serialize_int_1 | Write a 1-byte integer |
caml_serialize_int_2 | Write a 2-byte integer |
caml_serialize_int_4 | Write a 4-byte integer |
caml_serialize_int_8 | Write an 8-byte integer |
caml_serialize_float_4 | Write a 4-byte float |
caml_serialize_float_8 | Write an 8-byte float |
caml_serialize_block_1 | Write an array of 1-byte quantities |
caml_serialize_block_2 | Write an array of 2-byte quantities |
caml_serialize_block_4 | Write an array of 4-byte quantities |
caml_serialize_block_8 | Write an array of 8-byte quantities |
caml_deserialize_uint_1 | Read an unsigned 1-byte integer |
caml_deserialize_sint_1 | Read a signed 1-byte integer |
caml_deserialize_uint_2 | Read an unsigned 2-byte integer |
caml_deserialize_sint_2 | Read a signed 2-byte integer |
caml_deserialize_uint_4 | Read an unsigned 4-byte integer |
caml_deserialize_sint_4 | Read a signed 4-byte integer |
caml_deserialize_uint_8 | Read an unsigned 8-byte integer |
caml_deserialize_sint_8 | Read a signed 8-byte integer |
caml_deserialize_float_4 | Read a 4-byte float |
caml_deserialize_float_8 | Read an 8-byte float |
caml_deserialize_block_1 | Read an array of 1-byte quantities |
caml_deserialize_block_2 | Read an array of 2-byte quantities |
caml_deserialize_block_4 | Read an array of 4-byte quantities |
caml_deserialize_block_8 | Read an array of 8-byte quantities |
caml_deserialize_error | Signal an error during deserialization; input_value or Marshal.from_... raise a Failure exception after cleaning up their internal data structures |
Serialization functions are attached to the custom blocks to which they apply. Obviously, deserialization functions cannot be attached this way, since the custom block does not exist yet when deserialization begins! Thus, the struct custom_operations that contain deserialization functions must be registered with the deserializer in advance, using the register_custom_operations function declared in <caml/custom.h>. Deserialization proceeds by reading the identifier off the input stream, allocating a custom block of the size specified in the input stream, searching the registered struct custom_operation blocks for one with the same identifier, and calling its deserialize function to fill the data part of the custom block.
Identifiers in struct custom_operations must be chosen carefully, since they must identify uniquely the data structure for serialization and deserialization operations. In particular, consider including a version number in the identifier; this way, the format of the data can be changed later, yet backward-compatible deserialisation functions can be provided.
Identifiers starting with _ (an underscore character) are reserved for the OCaml runtime system; do not use them for your custom data. We recommend to use a URL (http://mymachine.mydomain.com/mylibrary/version-number) or a Java-style package name (com.mydomain.mymachine.mylibrary.version-number) as identifiers, to minimize the risk of identifier collision.
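As an illustration, here is a sketch of serialization and deserialization functions for a custom block whose data part is a single 64-bit integer of C type int64_t. The function names and the stored data are hypothetical; the signatures follow the serialize and deserialize fields of struct custom_operations:

#include <stdint.h>
#include <caml/mlvalues.h>
#include <caml/custom.h>
#include <caml/intext.h>

static void counter_serialize(value v,
                              uintnat * wsize_32, uintnat * wsize_64)
{
  caml_serialize_int_8(*((int64_t *) Data_custom_val(v)));
  *wsize_32 = 8;    /* size in bytes of the data part on 32-bit platforms */
  *wsize_64 = 8;    /* size in bytes of the data part on 64-bit platforms */
}

static uintnat counter_deserialize(void * dst)
{
  *((int64_t *) dst) = caml_deserialize_sint_8();
  return 8;         /* number of bytes of the data part that were filled */
}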
Custom blocks generalize the finalized blocks that were present in OCaml prior to version 3.00. For backward compatibility, the format of custom blocks is compatible with that of finalized blocks, and the caml_alloc_final function is still available to allocate a custom block with a given finalization function, but default comparison, hashing and serialization functions. caml_alloc_final(n, f, used, max) returns a fresh custom block of size n+1 words, with finalization function f. The first word is reserved for storing the custom operations; the other n words are available for your data. The two parameters used and max are used to control the speed of garbage collection, as described for caml_alloc_custom.
This section explains how C stub code that interfaces C or Fortran code with OCaml code can use Bigarrays.
The include file <caml/bigarray.h> must be included in the C stub file. It declares the functions, constants and macros discussed below.
If v is an OCaml value representing a Bigarray, the expression Caml_ba_data_val(v) returns a pointer to the data part of the array. This pointer is of type void * and can be cast to the appropriate C type for the array (e.g. double [], char [][10], etc.).
Various characteristics of the OCaml Bigarray can be consulted from C as follows:
C expression | Returns |
Caml_ba_array_val(v)->num_dims | number of dimensions |
Caml_ba_array_val(v)->dim[i] | i-th dimension |
Caml_ba_array_val(v)->flags & CAML_BA_KIND_MASK | kind of array elements |
The kind of array elements is one of the following constants:
Constant | Element kind |
CAML_BA_FLOAT32 | 32-bit single-precision floats |
CAML_BA_FLOAT64 | 64-bit double-precision floats |
CAML_BA_SINT8 | 8-bit signed integers |
CAML_BA_UINT8 | 8-bit unsigned integers |
CAML_BA_SINT16 | 16-bit signed integers |
CAML_BA_UINT16 | 16-bit unsigned integers |
CAML_BA_INT32 | 32-bit signed integers |
CAML_BA_INT64 | 64-bit signed integers |
CAML_BA_CAML_INT | 31- or 63-bit signed integers |
CAML_BA_NATIVE_INT | 32- or 64-bit (platform-native) integers |
CAML_BA_COMPLEX32 | 32-bit single-precision complex numbers |
CAML_BA_COMPLEX64 | 64-bit double-precision complex numbers |
CAML_BA_CHAR | 8-bit characters |
Caml_ba_array_val(v) must always be dereferenced immediately and not stored anywhere, including local variables. It resolves to a derived pointer: it is not a valid OCaml value but points to a memory region managed by the GC. For this reason this value must not be stored in any memory location that could be live across a GC.
The following example shows the passing of a two-dimensional Bigarray to a C function and a Fortran function.
extern void my_c_function(double * data, int dimx, int dimy);
extern void my_fortran_function_(double * data, int * dimx, int * dimy);

CAMLprim value caml_stub(value bigarray)
{
  int dimx = Caml_ba_array_val(bigarray)->dim[0];
  int dimy = Caml_ba_array_val(bigarray)->dim[1];
  /* C passes scalar parameters by value */
  my_c_function(Caml_ba_data_val(bigarray), dimx, dimy);
  /* Fortran passes all parameters by reference */
  my_fortran_function_(Caml_ba_data_val(bigarray), &dimx, &dimy);
  return Val_unit;
}
A pointer p to an already-allocated C or Fortran array can be wrapped and returned to OCaml as a Bigarray using the caml_ba_alloc or caml_ba_alloc_dims functions.
caml_ba_alloc(kind | layout, numdims, p, dims) returns an OCaml Bigarray wrapping the data pointed to by p. kind is the kind of array elements (one of the CAML_BA_ kind constants above). layout is CAML_BA_C_LAYOUT for an array with C layout and CAML_BA_FORTRAN_LAYOUT for an array with Fortran layout. numdims is the number of dimensions in the array. dims is an array of numdims long integers, giving the sizes of the array in each dimension.
caml_ba_alloc_dims(kind | layout, numdims, p, dim1, ..., dimnumdims) is the same as caml_ba_alloc, but the sizes of the array in each dimension are listed as extra arguments in the function call, rather than being passed as an array.
The following example illustrates how statically-allocated C and Fortran arrays can be made available to OCaml.
extern long my_c_array[100][200];
extern float my_fortran_array_[300][400];

CAMLprim value caml_get_c_array(value unit)
{
  long dims[2];
  dims[0] = 100; dims[1] = 200;
  return caml_ba_alloc(CAML_BA_NATIVE_INT | CAML_BA_C_LAYOUT,
                       2, my_c_array, dims);
}

CAMLprim value caml_get_fortran_array(value unit)
{
  return caml_ba_alloc_dims(CAML_BA_FLOAT32 | CAML_BA_FORTRAN_LAYOUT,
                            2, my_fortran_array_, 300L, 400L);
}
This section describes how to make calls to C functions cheaper.
Note: this only applies to the native-code compiler. So whenever you use any of these methods, you have to provide an alternative byte-code stub, since the byte-code compiler ignores all the special annotations.
We said earlier that all OCaml objects are represented by the C type value, and one has to use macros such as Int_val to decode data from the value type. It is however possible to tell the OCaml native-code compiler to do this for us and pass arguments unboxed to the C function. Similarly it is possible to tell OCaml to expect the result unboxed and box it for us.
The motivation is that, by letting "ocamlopt" deal with boxing, it can often decide to suppress it entirely.
For instance let's consider this example:
external foo : float -> float -> float = "foo"

let f a b =
  let len = Array.length a in
  assert (Array.length b = len);
  let res = Array.make len 0. in
  for i = 0 to len - 1 do
    res.(i) <- foo a.(i) b.(i)
  done
Float arrays are unboxed in OCaml; however, the C function foo expects its arguments as boxed floats and returns a boxed float. Hence the OCaml compiler has no choice but to box a.(i) and b.(i) and unbox the result of foo. This results in the allocation of 3 * len temporary float values.
Now if we annotate the arguments and result with [@unboxed], the native-code compiler will be able to avoid all these allocations:
external foo : (float [@unboxed]) -> (float [@unboxed]) -> (float [@unboxed]) = "foo_byte" "foo"
In this case the C functions must look like:
CAMLprim double foo(double a, double b)
{
  ...
}

CAMLprim value foo_byte(value a, value b)
{
  return caml_copy_double(foo(Double_val(a), Double_val(b)));
}
For convenience, when all arguments and the result are annotated with [@unboxed], it is possible to put the attribute only once on the declaration itself. So we can also write instead:
external foo : float -> float -> float = "foo_byte" "foo" [@@unboxed]
The following table summarizes which OCaml types can be unboxed, and which C types should be used in correspondence:
OCaml type | C type |
float | double |
int32 | int32_t |
int64 | int64_t |
nativeint | intnat |
Similarly, it is possible to pass untagged OCaml integers between OCaml and C. This is done by annotating the arguments and/or result with [@untagged]:
external f : string -> (int [@untagged]) = "f_byte" "f"
The corresponding C type must be intnat.
Note: do not use the C int type in correspondence with (int [@untagged]). This is because they often differ in size.
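As an illustration of the declaration above, here is a sketch of the corresponding C side: f is the native-code stub returning an untagged result, f_byte the byte-code stub (the computed result, a string length, is chosen only for the example):

#include <string.h>
#include <caml/mlvalues.h>

/* Native-code stub: the string argument is still a value,
   but the result is passed back untagged as an intnat. */
CAMLprim intnat f(value s)
{
  return strlen(String_val(s));
}

/* Byte-code stub: takes and returns ordinary values. */
CAMLprim value f_byte(value s)
{
  return Val_long(f(s));
}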
In order to be able to run the garbage collector in the middle of a C function, the OCaml native-code compiler generates some bookkeeping code around C calls. Technically it wraps every C call with the C function caml_c_call which is part of the OCaml runtime.
For small functions that are called repeatedly, this indirection can have a big impact on performance. However, it is not needed if we know that the C function doesn't allocate, doesn't raise exceptions, and doesn't release the domain lock (see section 22.12.2). We can instruct the OCaml native-code compiler of this fact by annotating the external declaration with the attribute [@@noalloc]:
external bar : int -> int -> int = "foo" [@@noalloc]
In this case calling bar from OCaml is as cheap as calling any other OCaml function, except for the fact that the OCaml compiler can't inline C functions...
Using these attributes, it is possible to call C library functions with no indirection. For instance many math functions are defined this way in the OCaml standard library:
external sqrt : float -> float = "caml_sqrt_float" "sqrt"
  [@@unboxed] [@@noalloc]
(** Square root. *)

external exp : float -> float = "caml_exp_float" "exp"
  [@@unboxed] [@@noalloc]
(** Exponential. *)

external log : float -> float = "caml_log_float" "log"
  [@@unboxed] [@@noalloc]
(** Natural logarithm. *)
Using multiple threads (shared-memory concurrency) in a mixed OCaml/C application requires special precautions, which are described in this section.
Callbacks from C to OCaml are possible only if the calling thread is known to the OCaml run-time system. Threads created from OCaml (through the Thread.create function of the system threads library) are automatically known to the run-time system. If the application creates additional threads from C and wishes to callback into OCaml code from these threads, it must first register them with the run-time system. The following functions are declared in the include file <caml/threads.h>.
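For instance, here is a sketch of a thread created from C that registers itself before calling back into OCaml; it uses the registration functions caml_c_thread_register and caml_c_thread_unregister together with the acquire/release functions described further below, and the worker function itself is only a placeholder:

#include <caml/threads.h>

void * worker(void * arg)
{
  caml_c_thread_register();      /* returns 1 on success, 0 on failure */
  caml_acquire_runtime_system(); /* hold the domain lock for callbacks */
  /* ... perform callbacks into OCaml here ... */
  caml_release_runtime_system();
  caml_c_thread_unregister();    /* before the thread terminates */
  return NULL;
}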
Domains are the unit of parallelism for OCaml programs. When using the systhreads library, multiple threads may be attached to the same domain. However, at any time, at most one of those threads can be executing OCaml code, or C code that uses the OCaml run-time system, within a given domain. Technically, this is enforced by a "domain lock" that any thread must hold while executing such code within a domain.
When OCaml calls the C code implementing a primitive, the domain lock is held, therefore the C code has full access to the facilities of the run-time system. However, no other thread in the same domain can execute OCaml code concurrently with the C code of the primitive. See also chapter 9.6 for the behaviour with multiple domains.
If a C primitive runs for a long time or performs potentially blocking input-output operations, it can explicitly release the domain lock, enabling other OCaml threads in the same domain to run concurrently with its operations. The C code must re-acquire the domain lock before returning to OCaml. This is achieved with the following functions, declared in the include file <caml/threads.h>.
These functions poll for pending signals by calling asynchronous callbacks (section 22.5.3) before releasing and after acquiring the lock. They can therefore execute arbitrary OCaml code including raising an asynchronous exception.
After caml_release_runtime_system() was called and until caml_acquire_runtime_system() is called, the C code must not access any OCaml data, nor call any function of the run-time system, nor call back into OCaml code. Consequently, arguments provided by OCaml to the C primitive must be copied into C data structures before calling caml_release_runtime_system(), and results to be returned to OCaml must be encoded as OCaml values after caml_acquire_runtime_system() returns.
Example: the following C primitive invokes gethostbyname to find the IP address of a host name. The gethostbyname function can block for a long time, so we choose to release the OCaml run-time system while it is running.
CAMLprim value stub_gethostbyname(value vname)
{
  CAMLparam1 (vname);
  CAMLlocal1 (vres);
  struct hostent * h;
  char * name;

  /* Copy the string argument to a C string, allocated outside the
     OCaml heap. */
  name = caml_stat_strdup(String_val(vname));
  /* Release the OCaml run-time system */
  caml_release_runtime_system();
  /* Resolve the name */
  h = gethostbyname(name);
  /* Free the copy of the string, which we might as well do before
     acquiring the runtime system to benefit from parallelism. */
  caml_stat_free(name);
  /* Re-acquire the OCaml run-time system */
  caml_acquire_runtime_system();
  /* Encode the relevant fields of h as the OCaml value vres */
  ...                          /* Omitted */
  /* Return to OCaml */
  CAMLreturn (vres);
}
The macro Caml_state evaluates to the domain state variable, and checks in debug mode that the domain lock is held. Such a check is also placed in normal mode at key entry points of the C API; this is why calling some of the runtime functions and macros without correctly owning the domain lock can result in a fatal error: no domain lock held. The variant Caml_state_opt does not perform any check but evaluates to NULL when the domain lock is not held. This lets you determine whether a thread belonging to a domain currently holds its domain lock, for various purposes.
Callbacks from C to OCaml must be performed while holding the domain lock to the OCaml run-time system. This is naturally the case if the callback is performed by a C primitive that did not release the run-time system. If the C primitive released the run-time system previously, or the callback is performed from other C code that was not invoked from OCaml (e.g. an event loop in a GUI application), the run-time system must be acquired before the callback and released after:
caml_acquire_runtime_system();
/* Resolve OCaml function vfun to be invoked */
/* Build OCaml argument varg to the callback */
vres = caml_callback(vfun, varg);
/* Copy relevant parts of result vres to C data structures */
caml_release_runtime_system();
Note: the acquire and release functions described above were introduced in OCaml 3.12. Older code uses the historical names declared in <caml/signals.h>: caml_enter_blocking_section (equivalent to caml_release_runtime_system) and caml_leave_blocking_section (equivalent to caml_acquire_runtime_system).
Intuition: a "blocking section" is a piece of C code that does not use the OCaml run-time system, typically a blocking input/output operation.
This section contains some general guidelines for writing C stubs that use Windows Unicode APIs.
The OCaml system under Windows can be configured at build time in one of two modes:
In what follows, we say that a string has the OCaml encoding if it is encoded in UTF-8 when in Unicode mode, in the current code page in legacy mode, or is an arbitrary string under Unix. A string has the platform encoding if it is encoded in UTF-16 under Windows or is an arbitrary string under Unix.
From the point of view of the writer of C stubs, the challenges of interacting with Windows Unicode APIs are twofold:
The native C character type under Windows is WCHAR, two bytes wide, while under Unix it is char, one byte wide. A type char_os is defined in <caml/misc.h> that stands for the concrete C character type of each platform. Strings in the platform encoding are of type char_os *.
The following functions are exposed to help write compatible C stubs. To use them, you need to include both <caml/misc.h> and <caml/osdeps.h>.
Note: For maximum backwards compatibility in Unicode mode, if the argument is not a valid UTF-8 string, this function will fall back to assuming that it is encoded in the current code page.
Note: The strings returned by caml_stat_strdup_to_os and caml_stat_strdup_of_os are allocated using caml_stat_alloc, so they need to be deallocated using caml_stat_free when they are no longer needed.
We want to bind the function getenv in a way that works both under Unix and Windows. Under Unix this function has the prototype:
char *getenv(const char *);
While the Unicode version under Windows has the prototype:
WCHAR *_wgetenv(const WCHAR *);
In terms of char_os, both functions take an argument of type char_os * and return a result of the same type. We begin by choosing the right implementation of the function to bind:
#ifdef _WIN32
#define getenv_os _wgetenv
#else
#define getenv_os getenv
#endif
The rest of the binding is the same for both platforms:
#include <caml/mlvalues.h>
#include <caml/misc.h>
#include <caml/alloc.h>
#include <caml/fail.h>
#include <caml/osdeps.h>

#include <stdlib.h>

CAMLprim value stub_getenv(value var_name)
{
  CAMLparam1(var_name);
  CAMLlocal1(var_value);
  char_os *var_name_os, *var_value_os;

  var_name_os = caml_stat_strdup_to_os(String_val(var_name));
  var_value_os = getenv_os(var_name_os);
  caml_stat_free(var_name_os);

  if (var_value_os == NULL)
    caml_raise_not_found();

  var_value = caml_copy_string_of_os(var_value_os);

  CAMLreturn(var_value);
}
The ocamlmklib command facilitates the construction of libraries containing both OCaml code and C code, and usable both in static linking and dynamic linking modes. This command is available under Windows since Objective Caml 3.11 and under other operating systems since Objective Caml 3.03.
The ocamlmklib command takes three kinds of arguments:
It generates the following outputs:
In addition, the following options are recognized:
On native Windows, the following environment variable is also consulted:
Consider an OCaml interface to the standard libz C library for reading and writing compressed files. Assume this library resides in /usr/local/zlib. This interface is composed of an OCaml part zip.cmo/zip.cmx and a C part zipstubs.o containing the stub code around the libz entry points. The following command builds the OCaml libraries zip.cma and zip.cmxa, as well as the companion C libraries dllzip.so and libzip.a:
ocamlmklib -o zip zip.cmo zip.cmx zipstubs.o -lz -L/usr/local/zlib
If shared libraries are supported, this performs the following commands:
ocamlc -a -o zip.cma zip.cmo -dllib -lzip \
    -cclib -lzip -cclib -lz -ccopt -L/usr/local/zlib
ocamlopt -a -o zip.cmxa zip.cmx -cclib -lzip \
    -cclib -lz -ccopt -L/usr/local/zlib
gcc -shared -o dllzip.so zipstubs.o -lz -L/usr/local/zlib
ar rc libzip.a zipstubs.o
Note: This example is on a Unix system. The exact command lines may be different on other systems.
If shared libraries are not supported, the following commands are performed instead:
ocamlc -a -custom -o zip.cma zip.cmo -cclib -lzip \
    -cclib -lz -ccopt -L/usr/local/zlib
ocamlopt -a -o zip.cmxa zip.cmx -cclib -lzip \
    -cclib -lz -ccopt -L/usr/local/zlib
ar rc libzip.a zipstubs.o
Instead of building simultaneously the bytecode library, the native-code library and the C libraries, ocamlmklib can be called three times to build each separately. Thus,
ocamlmklib -o zip zip.cmo -lz -L/usr/local/zlib
builds the bytecode library zip.cma, and
ocamlmklib -o zip zip.cmx -lz -L/usr/local/zlib
builds the native-code library zip.cmxa, and
ocamlmklib -o zip zipstubs.o -lz -L/usr/local/zlib
builds the C libraries dllzip.so and libzip.a. Notice that the support libraries (-lz) and the corresponding options (-L/usr/local/zlib) must be given on all three invocations of ocamlmklib, because they are needed at different times depending on whether shared libraries are supported.
Not all headers available in the caml/ directory were described in the previous sections. The unmentioned headers are part of the internal runtime API, for which there is no stability guarantee. If you really need access to this internal runtime API, this section provides some guidelines that may help you write code that is less likely to break with new versions of OCaml.
Programmers who come to rely on the internal API for a use case they find realistic and useful are encouraged to open a request for improvement on the bug tracker.
Since OCaml 4.04, it is possible to get access to every part of the internal runtime API by defining the CAML_INTERNALS macro before including the caml header files. If this macro is not defined, parts of the internal runtime API are hidden.
If you use internal C variables, do not redeclare them by hand: import them by including the corresponding header files. The representation of those variables already changed once, in OCaml 4.10, and is still evolving. If your code relies on such internal and brittle properties, it will break at some point.
For instance, rather than redefining caml_young_limit:
extern int caml_young_limit;
which breaks in OCaml ≥ 4.10, you should include the minor_gc header:
#include <caml/minor_gc.h>
Finally, if including the right headers is not enough, or if you need to support versions older than OCaml 4.04, the header file caml/version.h should help you define your own compatibility layer. This file provides a few macros describing the current OCaml version. In particular, the OCAML_VERSION macro describes the current version in the MmmPP format. For example, if you need some specific handling for versions older than 4.10.0, you could write:
#include <caml/version.h>
#if OCAML_VERSION >= 41000
...
#else
...
#endif
Flambda is the term used to describe a series of optimisation passes provided by the native code compilers as of OCaml 4.03.
Flambda aims to make it easier to write idiomatic OCaml code without incurring performance penalties.
To use the Flambda optimisers it is necessary to pass the -flambda option to the OCaml configure script. (There is no support for a single compiler that can operate in both Flambda and non-Flambda modes.) Code compiled with Flambda cannot be linked into the same program as code compiled without Flambda. Attempting to do this will result in a compiler error.
Whether or not a particular ocamlopt uses Flambda may be determined by invoking it with the -config option and looking for a line starting with "flambda:". If such a line is present and says "true", then Flambda is supported, otherwise it is not.
Flambda provides full optimisation across different compilation units, so long as the .cmx files for the dependencies of the unit currently being compiled are available. (A compilation unit corresponds to a single .ml source file.) However it does not yet act entirely as a whole-program compiler: for example, elimination of dead code across a complete set of compilation units is not supported.
Optimisation with Flambda is not currently supported when generating bytecode.
Flambda should not in general affect the semantics of existing programs. Two exceptions to this rule are: possible elimination of pure code that is being benchmarked (see section 23.14) and changes in behaviour of code using unsafe operations (see section 23.15).
Flambda does not yet optimise array or string bounds checks. Neither does it take hints for optimisation from any assertions written by the user in the code.
Consult the Glossary at the end of this chapter for definitions of technical terms used below.
The Flambda optimisers provide a variety of command-line flags that may be used to control their behaviour. Detailed descriptions of each flag are given in the referenced sections. Those sections also describe any arguments which the particular flags take.
Commonly-used options:
Less commonly-used options:
Advanced options, only needed for detailed tuning:
Flambda operates in rounds: one round consists of a certain sequence of transformations that may then be repeated in order to achieve more satisfactory results. The number of rounds can be set manually using the -rounds parameter (although this is not necessary when using predefined optimisation levels such as with -O2 and -O3). For high optimisation the number of rounds might be set at 3 or 4.
Command-line flags that may apply per round, for example those with -cost in the name, accept arguments of the form:
The flags -Oclassic, -O2 and -O3 are applied before all other flags, meaning that certain parameters may be overridden without having to specify every parameter usually invoked by the given optimisation level.
Inlining refers to the copying of the code of a function to a place where the function is called. The code of the function will be surrounded by bindings of its parameters to the corresponding arguments.
The aims of inlining are:
These goals are often reached not just by inlining itself but also by other optimisations that the compiler is able to perform as a result of inlining.
When a recursive call to a function (within the definition of that function or another in the same mutually-recursive group) is inlined, the procedure is also known as unrolling. This is somewhat akin to loop peeling. For example, given the following code:
let rec fact x = if x = 0 then 1 else x * fact (x - 1)
let n = fact 4
unrolling once at the call site fact 4 produces (with the body of fact unchanged):
let n = if 4 = 0 then 1 else 4 * fact (4 - 1)
This simplifies to:
let n = 4 * fact 3
Flambda provides significantly enhanced inlining capabilities relative to previous versions of the compiler.
Inlining is performed together with all of the other Flambda optimisation passes, that is to say, after closure conversion. This has three particular advantages over a potentially more straightforward implementation prior to closure conversion:
In -Oclassic mode the behaviour of the Flambda inliner mimics previous versions of the compiler. (Code may still be subject to further optimisations not performed by previous versions of the compiler: functors may be inlined, constants are lifted and unused code is eliminated, all as described elsewhere in this chapter; see sections 23.3.3, 23.8.1 and 23.10.) At the definition site of a function, the body of the function is measured. It will then be marked as eligible for inlining (and hence inlined at every direct call site) if:
Non-Flambda versions of the compiler cannot inline functions that contain a definition of another function. However -Oclassic does permit this. Further, non-Flambda versions also cannot inline functions that are only themselves exposed as a result of a previous pass of inlining, but again this is permitted by -Oclassic. For example:
module M : sig
  val i : int
end = struct
  let f x =
    let g y = x + y in
    g

  let h = f 3
  let i = h 4  (* h is correctly discovered to be g and inlined *)
end
All of this contrasts with the normal Flambda mode, that is to say without -Oclassic, where:
The Flambda mode is described in the next section.
The Flambda inlining heuristics, used whenever the compiler is configured for Flambda and -Oclassic was not specified, make inlining decisions at call sites. This helps in situations where the context is important. For example:
let f b x = if b then x else ... big expression ...
let g x = f true x
In this case, we would like to inline f into g, because a conditional jump can be eliminated and the code size should reduce. If the inlining decision had been made after the declaration of f, without seeing its uses, its size would probably have made it ineligible for inlining; at the call site, however, its final size can be known. Further, this function should probably not be inlined systematically: if b is unknown, or indeed false, there is little benefit to trade off against a large increase in code size. In the existing non-Flambda inliner this is not a great problem because chains of inlining are cut off fairly quickly. However, it has led to excessive use of overly-large inlining parameters such as -inline 10000.
In more detail, at each call site the following procedure is followed:
Inlining within recursive functions of calls to other functions in the same mutually-recursive group is kept in check by an unrolling depth, described below. This ensures that functions are not unrolled to excess. (Unrolling is only enabled if -O3 optimisation level is selected and/or the -inline-max-unroll flag is passed with an argument greater than zero.)
There is nothing particular about functors that inhibits inlining compared to normal functions. To the inliner, these both look the same, except that functors are marked as such.
Applications of functors at toplevel are biased in favour of inlining. (This bias may be adjusted: see the documentation for -inline-lifting-benefit below.)
Applications of functors not at toplevel, for example in a local module inside some other expression, are treated by the inliner identically to normal function calls.
The inliner will be able to consider inlining a call to a function in a first class module if it knows which particular function is going to be called. The presence of the first-class module record that wraps the set of functions in the module does not per se inhibit inlining.
Method calls to objects are not at present inlined by Flambda.
If the -inlining-report option is provided to the compiler then a file will be emitted corresponding to each round of optimisation. For the OCaml source file basename.ml the files are named basename.round.inlining.org, with round a zero-based integer. These files are formatted as "org mode" and contain English prose describing the decisions that the inliner took.
Inlining typically results in an increase in code size which, if left unchecked, may lead not only to grossly large executables and excessive compilation times but also to a decrease in performance due to worse locality. As such, the Flambda inliner trades off the change in code size against the expected runtime performance benefit, with the benefit being computed based on the number of operations that the compiler observes may be removed as a result of inlining.
For example given the following code:
let f b x = if b then x else ... big expression ...
let g x = f true x
it would be observed that inlining of f would remove:
Formally, an estimate of runtime performance benefit is computed by first summing the cost of the operations that are known to be removed as a result of the inlining and subsequent simplification of the inlined body. The individual costs for the various kinds of operations may be adjusted using the various -inline-...-cost flags as follows. Costs are specified as integers. All of these flags accept a single argument describing such integers using the conventions detailed in section 23.2.1.
(Default values are described in section 23.5 below.)
The initial benefit value is then scaled by a factor that attempts to compensate for the fact that the current point in the code, if under some number of conditional branches, may be cold. (Flambda does not currently compute hot and cold paths.) The factor, which is the estimated probability that the inliner really is on a hot path, is calculated as 1/(1+f)^d, where f is set by -inline-branch-factor and d is the nesting depth of branches at the current point. As the inliner descends into more deeply-nested branches, the benefit of inlining thus lessens.
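As a small worked example, with the default -inline-branch-factor of 0.1, code nested under two conditional branches has its estimated benefit scaled by 1/(1.1)^2, roughly 0.83.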
The resulting benefit value is known as the estimated benefit.
The change in code size is also estimated: morally speaking it should be the change in machine code size, but since that is not available to the inliner, an approximation is used.
If the estimated benefit exceeds the increase in code size then the inlined version of the function will be kept. Otherwise the function will not be inlined.
Applications of functors at toplevel will be given an additional benefit (which may be controlled by the -inline-lifting-benefit flag) to bias inlining in such situations towards keeping the inlined version.
As described above, there are three parameters that restrict the search for inlining opportunities during speculation:
These parameters are ultimately bounded by the arguments provided to the corresponding command-line flags (or their default values):
Note in particular that -inline does not have the meaning that it has in the previous compiler or in -Oclassic mode. In both of those situations -inline was effectively some kind of basic assessment of inlining benefit. However in Flambda inlining mode it corresponds to a constraint on the search; the assessment of benefit is independent, as described above.
When speculation starts the inlining threshold starts at the value set by -inline (or -inline-toplevel if appropriate, see above). Upon making a speculative inlining decision the threshold is reduced by the code size of the function being inlined. If the threshold becomes exhausted, at or below zero, no further speculation will be performed.
The inlining depth starts at zero and is increased by one every time the inliner descends into another function. It is then decreased by one every time the inliner leaves such function. If the depth exceeds the value set by -inline-max-depth then speculation stops. This parameter is intended as a general backstop for situations where the inlining threshold does not control the search sufficiently.
The unrolling depth applies to calls within the same mutually-recursive group of functions. Each time an inlining of such a call is performed the depth is incremented by one when examining the resulting body. If the depth reaches the limit set by -inline-max-unroll then speculation stops.
The inliner may discover a call site to a recursive function where something is known about the arguments: for example, they may be equal to some other variables currently in scope. In this situation it may be beneficial to specialise the function to those arguments. This is done by copying the declaration of the function (and any others involved in any same mutually-recursive declaration) and noting the extra information about the arguments. The arguments augmented by this information are known as specialised arguments. In order to try to ensure that specialisation is not performed uselessly, arguments are only specialised if it can be shown that they are invariant: in other words, during the execution of the recursive function(s) themselves, the arguments never change.
Unless overridden by an attribute (see below), specialisation of a function will not be attempted if:
The compiler can prove invariance of function arguments across multiple functions within a recursive group (although this has some limitations, as shown by the example below).
It should be noted that the unboxing of closures pass (see below) can introduce specialised arguments on non-recursive functions. (No other place in the compiler currently does this.)
Consider a function that iterates over a list, applying a given function to each element. It might be written like so:
let rec iter f l =
  match l with
  | [] -> ()
  | h :: t -> f h; iter f t
and used like this:
let print_int x = print_endline (Int.to_string x)
let run xs = iter print_int (List.rev xs)
The argument f to iter is invariant so the function may be specialised:
let run xs =
  let rec iter' f l =
    (* The compiler knows: f holds the same value as print_int throughout iter'. *)
    match l with
    | [] -> ()
    | h :: t -> f h; iter' f t
  in
  iter' print_int (List.rev xs)
The compiler notes down that for the function iter', the argument f is specialised to the constant closure print_int. This means that the body of iter' may be simplified:
let run xs =
  let rec iter' f l =
    (* The compiler knows: f holds the same value as print_int throughout iter'. *)
    match l with
    | [] -> ()
    | h :: t ->
      print_int h;  (* this is now a direct call *)
      iter' f t
  in
  iter' print_int (List.rev xs)
The call to print_int can indeed be inlined:
let run xs =
  let rec iter' f l =
    (* The compiler knows: f holds the same value as print_int throughout iter'. *)
    match l with
    | [] -> ()
    | h :: t ->
      print_endline (Int.to_string h);
      iter' f t
  in
  iter' print_int (List.rev xs)
The unused specialised argument f may now be removed, leaving:
let run xs =
  let rec iter' l =
    match l with
    | [] -> ()
    | h :: t ->
      print_endline (Int.to_string h);
      iter' t
  in
  iter' (List.rev xs)
The compiler cannot currently detect invariance in cases such as the following.
let rec iter_swap f g l =
  match l with
  | [] -> ()
  | 0 :: t -> iter_swap g f t
  | h :: t -> f h; iter_swap f g t
The benefit of specialisation is assessed in a similar way as for inlining. Specialised argument information may mean that the body of the function being specialised can be simplified: the removed operations are accumulated into a benefit. This, together with the size of the duplicated (specialised) function declaration, is then assessed against the size of the call to the original function.
The default settings (when not using -Oclassic) are for one round of optimisation using the following parameters.
Parameter | Setting |
-inline | 10 |
-inline-branch-factor | 0.1 |
-inline-alloc-cost | 7 |
-inline-branch-cost | 5 |
-inline-call-cost | 5 |
-inline-indirect-cost | 4 |
-inline-prim-cost | 3 |
-inline-lifting-benefit | 1300 |
-inline-toplevel | 160 |
-inline-max-depth | 1 |
-inline-max-unroll | 0 |
-unbox-closures-factor | 10 |
When -O2 is specified two rounds of optimisation are performed. The first round uses the default parameters (see above). The second uses the following parameters.
Parameter | Setting |
-inline | 25 |
-inline-branch-factor | Same as default |
-inline-alloc-cost | Double the default |
-inline-branch-cost | Double the default |
-inline-call-cost | Double the default |
-inline-indirect-cost | Double the default |
-inline-prim-cost | Double the default |
-inline-lifting-benefit | Same as default |
-inline-toplevel | 400 |
-inline-max-depth | 2 |
-inline-max-unroll | Same as default |
-unbox-closures-factor | Same as default |
When -O3 is specified three rounds of optimisation are performed. The first two rounds are as for -O2. The third round uses the following parameters.
Parameter | Setting |
-inline | 50 |
-inline-branch-factor | Same as default |
-inline-alloc-cost | Triple the default |
-inline-branch-cost | Triple the default |
-inline-call-cost | Triple the default |
-inline-indirect-cost | Triple the default |
-inline-prim-cost | Triple the default |
-inline-lifting-benefit | Same as default |
-inline-toplevel | 800 |
-inline-max-depth | 3 |
-inline-max-unroll | 1 |
-unbox-closures-factor | Same as default |
Should the inliner prove recalcitrant and refuse to inline a particular function, or if the observed inlining decisions are not to the programmer's satisfaction for some other reason, inlining behaviour can be dictated by the programmer directly in the source code. One example where this might be appropriate is when the programmer, but not the compiler, knows that a particular function call is on a cold code path. It might be desirable to prevent inlining of the function so that the code size along the hot path is kept smaller, so as to increase locality.
The inliner is directed using attributes. For non-recursive functions (and one-step unrolling of recursive functions, although @unroll is more clear for this purpose) the following are supported:
For recursive functions the relevant attributes are:
A compiler warning will be emitted if it was found impossible to obey an annotation from an @inlined or @specialised attribute. For example:
module F (M : sig type t end) = struct
  let[@inline never] bar x = x * 3
  let foo x = (bar [@inlined]) (42 + x)
end [@@inline never]

module X = F [@inlined] (struct type t = int end)
Simplification, which is run in conjunction with inlining, propagates information (known as approximations) about which variables hold what values at runtime. Certain relationships between variables and symbols are also tracked: for example, some variable may be known to always hold the same value as some other variable; or perhaps some variable may be known to always hold the value pointed to by some symbol.
The propagation can help to eliminate allocations in cases such as:
let f x y =
  ...
  let p = x, y in
  ...
  ... (fst p) ... (snd p) ...
The projections from p may be replaced by uses of the variables x and y, potentially meaning that p becomes unused.
The propagation performed by the simplification pass is also important for discovering which functions flow to indirect call sites. This can enable the transformation of such call sites into direct call sites, which makes them eligible for an inlining transformation.
Note that no information is propagated about the contents of strings, even in safe-string mode, because it cannot yet be guaranteed that they are immutable throughout a given program.
Expressions found to be constant will be lifted to symbol bindings (that is to say, they will be statically allocated in the object file) when they evaluate to boxed values. Such constants may be straightforward numeric constants, such as the floating-point number 42.0, or more complicated values such as constant closures.
Lifting of constants to toplevel reduces allocation at runtime.
The compiler aims to share constants lifted to toplevel such that there are no duplicate definitions. However if .cmx files are hidden from the compiler then maximal sharing may not be possible.
The following language semantics apply specifically to constant float arrays. (By "constant float array" is meant an array consisting entirely of floating-point numbers that are known at compile time; a common case is a literal such as [| 42.0; 43.0; |].)
Toplevel let-expressions may be lifted to symbol bindings to ensure that the corresponding bound variables are not captured by closures. If the defining expression of a given binding is found to be constant, it is bound as such (the technical term is a let-symbol binding).
Otherwise, the symbol is bound to a (statically-allocated) preallocated block containing one field. At runtime, the defining expression will be evaluated and the first field of the block filled with the resulting value. This initialise-symbol binding causes one extra indirection but ensures, by virtue of the symbolâs address being known at compile time, that uses of the value are not captured by closures.
It should be noted that the blocks corresponding to initialise-symbol bindings are kept alive forever, by virtue of them occurring in a static table of GC roots within the object file. This extended lifetime of expressions may on occasion be surprising. If it is desired to create some non-constant value (for example when writing GC tests) that does not have this extended lifetime, then it may be created and used inside a function, with the application point of that function (perhaps at toplevel), or indeed the function declaration itself, marked as never to be inlined. This technique prevents lifting of the definition of the value in question (assuming of course that it is not constant).
The transformations in this section relate to the splitting apart of boxed (that is to say, non-immediate) values. They are largely intended to reduce allocation, which tends to result in a runtime performance profile with lower variance and smaller tails.
This transformation is enabled unless -no-unbox-free-vars-of-closures is provided.
Variables that appear in closure environments may themselves be boxed values. As such, they may be split into further closure variables, each of which corresponds to some projection from the original closure variable(s). This transformation is called unboxing of closure variables or unboxing of free variables of closures. It is only applied when there is reasonable certainty that there are no uses of the boxed free variable itself within the corresponding function bodies.
In the following code, the compiler observes that the closure returned from the function f contains a variable pair (free in the body of f) that may be split into two separate variables.
let f x0 x1 =
  let pair = x0, x1 in
  Printf.printf "foo\n";
  fun y -> fst pair + snd pair + y
After some simplification one obtains:
let f x0 x1 =
  let pair_0 = x0 in
  let pair_1 = x1 in
  Printf.printf "foo\n";
  fun y -> pair_0 + pair_1 + y
and then:
let f x0 x1 =
  Printf.printf "foo\n";
  fun y -> x0 + x1 + y
The allocation of the pair has been eliminated.
This transformation does not operate if it would cause the closure to contain more than twice as many closure variables as it did beforehand.
This transformation is enabled unless -no-unbox-specialised-args is provided.
It may become the case during compilation that one or more invariant arguments to a function become specialised to a particular value. When such values are themselves boxed the corresponding specialised arguments may be split into more specialised arguments corresponding to the projections out of the boxed value that occur within the function body. This transformation is called unboxing of specialised arguments. It is only applied when there is reasonable certainty that the boxed argument itself is unused within the function.
If the function in question is involved in a recursive group then unboxing of specialised arguments may be immediately replicated across the group based on the dataflow between invariant arguments.
Given the following code, the compiler will inline loop into f and then observe that inv is invariant: it is always the pair formed by adding 42 and 43 to the argument x of the function f.
let rec loop inv xs =
  match xs with
  | [] -> fst inv + snd inv
  | x::xs -> x + loop2 xs inv
and loop2 ys inv =
  match ys with
  | [] -> 4
  | y::ys -> y - loop inv ys

let f x =
  Printf.printf "%d\n" (loop (x + 42, x + 43) [1; 2; 3])
Since the functions have sufficiently few arguments, more specialised arguments will be added. After some simplification one obtains:
let f x =
  let rec loop' xs inv_0 inv_1 =
    match xs with
    | [] -> inv_0 + inv_1
    | x::xs -> x + loop2' xs inv_0 inv_1
  and loop2' ys inv_0 inv_1 =
    match ys with
    | [] -> 4
    | y::ys -> y - loop' ys inv_0 inv_1
  in
  Printf.printf "%d\n" (loop' [1; 2; 3] (x + 42) (x + 43))
The allocation of the pair within f has been removed. (Since the two closures for loop' and loop2' are constant they will also be lifted to toplevel with no runtime allocation penalty. This would also happen without having run the transformation to unbox specialised arguments.)
The transformation to unbox specialised arguments never introduces extra allocation.
The transformation will not unbox arguments if it would result in the original function having sufficiently many arguments so as to inhibit tail-call optimisation.
The transformation is implemented by creating a wrapper function that accepts the original arguments. Meanwhile, the original function is renamed and extra arguments are added corresponding to the unboxed specialised arguments; this new function is called from the wrapper. The wrapper will then be inlined at direct call sites. Indeed, all call sites will be direct unless -unbox-closures is being used, since they will have been generated by the compiler when originally specialising the function. (In the case of -unbox-closures other functions may appear with specialised arguments; in this case there may be indirect calls and these will incur a small penalty owing to having to bounce through the wrapper. The technique of direct call surrogates used for -unbox-closures is not used by the transformation to unbox specialised arguments.)
This transformation is not enabled by default. It may be enabled using the -unbox-closures flag.
The transformation replaces closure variables by specialised arguments. The aim is to cause more closures to become closed. It is particularly applicable, as a means of reducing allocation, where the function concerned cannot be inlined or specialised. For example, some non-recursive function might be too large to inline; or some recursive function might offer no opportunities for specialisation perhaps because its only argument is one of type unit.
At present there may be a small penalty in terms of actual runtime performance when this transformation is enabled, although more stable performance may be obtained due to reduced allocation. It is recommended that developers experiment to determine whether the option is beneficial for their code. (It is expected that in the future it will be possible for the performance degradation to be removed.)
In the following code (which might typically occur when g is too large to inline) the value of x would usually be communicated to the application of the + function via the closure of g.
let f x =
  let g y = x + y in
  (g [@inlined never]) 42
Unboxing of the closure causes the value for x inside g to be passed as an argument to g rather than through its closure. This means that the closure of g becomes constant and may be lifted to toplevel, eliminating the runtime allocation.
The transformation is implemented by adding a new wrapper function in the manner of that used when unboxing specialised arguments. The closure variables are still free in the wrapper, but the intention is that when the wrapper is inlined at direct call sites, the relevant values are passed directly to the main function via the new specialised arguments.
Adding such a wrapper will penalise indirect calls to the function (which might exist in arbitrary places; remember that this transformation is not, for example, applied only on functions the compiler has produced as a result of specialisation) since such calls will bounce through the wrapper. To mitigate this, if a function is small enough when weighed up against the number of free variables being removed, it will be duplicated by the transformation to obtain two versions: the original (used for indirect calls, since we can do no better) and the wrapper/rewritten function pair as described in the previous paragraph. The wrapper/rewritten function pair will only be used at direct call sites of the function. (The wrapper in this case is known as a direct call surrogate, since it takes the place of another function, the unchanged version used for indirect calls, at direct call sites.)
The -unbox-closures-factor command line flag, which takes an integer, may be used to adjust the point at which a function is deemed large enough to be ineligible for duplication. The benefit of duplication is scaled by the integer before being evaluated against the size.
In the following code, there are two closure variables that would typically cause closure allocations. One is called fv and occurs inside the function baz; the other is called z and occurs inside the function bar. In this toy (yet sophisticated) example we again use an attribute to simulate the typical situation where the first argument of baz is too large to inline.
let foo c =
  let rec bar zs fv =
    match zs with
    | [] -> []
    | z::zs ->
      let rec baz f = function
        | [] -> []
        | a::l -> let r = fv + ((f [@inlined never]) a) in r :: baz f l
      in
      (baz (fun y -> z + y) [z; 2; 3; 4]) @ bar zs fv
  in
  Printf.printf "%d" (List.length (bar [1; 2; 3; 4] c))
The code resulting from applying -O3 -unbox-closures to this code passes the free variables via function arguments in order to eliminate all closure allocation in this example (aside from any that might be performed inside printf).
The simplification pass removes unused let bindings so long as their corresponding defining expressions have "no effects". See the section "Treatment of effects" below for the precise definition of this term.
This transformation is analogous to the removal of let-expressions whose defining expressions have no effects. It operates instead on symbol bindings, removing those that have no effects.
This transformation is only enabled by default for specialised arguments. It may be enabled for all arguments using the -remove-unused-arguments flag.
The pass analyses functions to determine which arguments are unused. Removal is effected by creating a wrapper function, which will be inlined at every direct call site, that accepts the original arguments and then discards the unused ones before calling the original function. As a consequence, this transformation may be detrimental if the original function is usually indirectly called, since such calls will now bounce through the wrapper. (The technique of direct call surrogates used to reduce this penalty during unboxing of closure variables (see above) does not yet apply to the pass that removes unused arguments.)
This transformation performs an analysis across the whole compilation unit to determine whether there exist closure variables that are never used. Such closure variables are then eliminated. (Note that this has to be a whole-unit analysis because a projection of a closure variable from some particular closure may have propagated to an arbitrary location within the code due to inlining.)
Flambda performs a simple analysis analogous to that performed elsewhere in the compiler that can transform refs into mutable variables that may then be held in registers (or on the stack as appropriate) rather than being allocated on the OCaml heap. This only happens so long as the reference concerned can be shown to not escape from its defining scope.
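As an illustrative sketch (the function below is not taken from the manual), the reference total never escapes its defining scope, so it is a candidate for being turned into a mutable variable held in a register or on the stack rather than allocated on the OCaml heap:

let sum_array a =
  let total = ref 0 in   (* candidate for unboxing: never escapes this scope *)
  for i = 0 to Array.length a - 1 do
    total := !total + a.(i)
  done;
  !total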
This transformation discovers closure variables that are known to be equal to specialised arguments. Such closure variables are replaced by the specialised arguments; the closure variables may then be removed by the âremoval of unused closure variablesâ pass (see below).
The Flambda optimisers classify expressions in order to determine whether an expression:
This is done by forming judgements on the effects and the coeffects that might be performed were the expression to be executed. Effects talk about how the expression might affect the world; coeffects talk about how the world might affect the expression.
Effects are classified as follows:
There is a single classification for coeffects:
It is assumed in the compiler that, subject to data dependencies, expressions with neither effects nor coeffects may be reordered with respect to other expressions.
Compilation of modules that are able to be statically allocated (for example, the module corresponding to an entire compilation unit, as opposed to a first class module dependent on values computed at runtime) initially follows the strategy used for bytecode. A sequence of let-bindings, which may be interspersed with arbitrary effects, surrounds a record creation that becomes the module block. The Flambda-specific transformation follows: these bindings are lifted to toplevel symbols, as described above.
Especially when writing benchmarking suites that run non-side-effecting algorithms in loops, it may be found that the optimiser entirely elides the code being benchmarked. This behaviour can be prevented by using the Sys.opaque_identity function (which indeed behaves as a normal OCaml function and does not possess any "magic" semantics). The documentation of the Sys module should be consulted for further details.
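For instance, in a benchmarking loop such as the following sketch (the fib function and the iteration count are purely illustrative), wrapping the input and the result in Sys.opaque_identity prevents the otherwise pure computation from being elided:

let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

let bench () =
  for _ = 1 to 1_000 do
    (* opaque_identity hides the constant argument and keeps the result live *)
    ignore (Sys.opaque_identity (fib (Sys.opaque_identity 20)))
  done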
The behaviour of the Flambda simplification pass means that certain unsafe operations, which may without Flambda or when using previous versions of the compiler be safe, must not be used. This specifically refers to functions found in the Obj module.
In particular, it is forbidden to change any value (for example using Obj.set_field or Obj.set_tag) that is not mutable. (Values returned from C stubs are always treated as mutable.) The compiler will emit warning 59 if it detects such a write, but it cannot warn in all cases. Here is an example of code that will trigger the warning:
let f x =
  let a = 42, x in
  (Obj.magic a : int ref) := 1;
  fst a
The reason this is unsafe is because the simplification pass believes that fst a holds the value 42; and indeed it must, unless type soundness has been broken via unsafe operations.
If it must be the case that code has to be written that triggers warning 59, but the code is known to actually be correct (for some definition of correct), then Sys.opaque_identity may be used to wrap the value before unsafe operations are performed upon it. Great care must be taken when doing this to ensure that the opacity is added at the correct place. It must be emphasised that this use of Sys.opaque_identity is only for exceptional cases. It should not be used in normal code or to try to guide the optimiser.
As an example, this code will return the integer 1:
let f x =
  let a = Sys.opaque_identity (42, x) in
  (Obj.magic a : int ref) := 1;
  fst a
However the following code will still return 42:
let f x =
  let a = 42, x in
  Sys.opaque_identity (Obj.magic a : int ref) := 1;
  fst a
High levels of inlining performed by Flambda may expose bugs in code thought previously to be correct. Take care, for example, not to add type annotations that claim some mutable value is always immediate if it might be possible for an unsafe operation to update it to a boxed value.
The following terminology is used in this chapter of the manual.
American fuzzy lop ("afl-fuzz") is a fuzzer, a tool for testing software by providing randomly-generated inputs, searching for those inputs which cause the program to crash.
Unlike most fuzzers, afl-fuzz observes the internal behaviour of the program being tested, and adjusts the test cases it generates to trigger unexplored execution paths. As a result, test cases generated by afl-fuzz cover more of the possible behaviours of the tested program than other fuzzers.
This requires that programs to be tested are instrumented to communicate with afl-fuzz. The native-code compiler "ocamlopt" can generate such instrumentation, allowing afl-fuzz to be used against programs written in OCaml.
For more information on afl-fuzz, see the website at http://lcamtuf.coredump.cx/afl/.
The instrumentation that afl-fuzz requires is not generated by default, and must be explicitly enabled, by passing the -afl-instrument option to ocamlopt.
To fuzz a large system without modifying build tools, OCaml's configure script also accepts the afl-instrument option. If OCaml is configured with afl-instrument, then all programs compiled by ocamlopt will be instrumented.
In rare cases, it is useful to control the amount of instrumentation generated. By passing the -afl-inst-ratio N argument to ocamlopt with N less than 100, instrumentation can be generated for only N% of branches. (See the afl-fuzz documentation on the parameter AFL_INST_RATIO for the precise effect of this).
As an example, we fuzz-test the following program, readline.ml:
let _ =
  let s = read_line () in
  match Array.to_list (Array.init (String.length s) (String.get s)) with
  | ['s'; 'e'; 'c'; 'r'; 'e'; 't'; ' '; 'c'; 'o'; 'd'; 'e'] -> failwith "uh oh"
  | _ -> ()
There is a single input (the string "secret code") which causes this program to crash, but finding it by blind random search is infeasible.
Instead, we compile with afl-fuzz instrumentation enabled:
ocamlopt -afl-instrument readline.ml -o readline
Next, we run the program under afl-fuzz:
mkdir input
echo asdf > input/testcase
mkdir output
afl-fuzz -m none -i input -o output ./readline
By inspecting instrumentation output, the fuzzer finds the crashing input quickly.
Note: To fuzz-test an OCaml program with afl-fuzz, passing the option -m none is required to disable afl-fuzz's default 50MB virtual memory limit.
This chapter describes the runtime events tracing system, which enables continuous extraction of performance information from the OCaml runtime with very low overhead. The system and interfaces are low-level and tightly coupled to the runtime implementation; end-users are expected to rely on tooling to consume and visualise the data of interest.
Data emitted includes:
There are three main classes of events emitted by the runtime events system:
The runtime events tracing system is designed to be used in different contexts:
The runtime events tracing system logs events to a ring buffer. Consequently, old events are overwritten by new ones. Consumers can either consume events continuously or do so only in response to some circumstance, e.g. if a particular query or operation takes longer than expected to complete.
The runtime tracing system conceptually consists of two parts: 1) the probes which emit events and 2) the events transport that ingests and transports these events.
Probes collect events from the runtime system. These are further split into two sets: 1) probes that are always available and 2) probes that are only available in the instrumented runtime. Probes in the instrumented runtime are primarily of interest to developers of the OCaml runtime and garbage collector and, at present, only consist of major heap allocation size counter events.
The full set of events emitted by probes and their documentation can be found in Module Runtime_events.
The events transport part of the system ingests events emitted by the probes and makes them available to consumers.
Events are transported using a data structure known as a ring buffer. This data structure consists of two pointers into a linear backing array, the tail pointer points to a location where new events can be written and the head pointer points to the oldest event in the buffer that can be read. When insufficient space is available in the backing array to write new events, the head pointer is advanced and the oldest events are overwritten by new ones.
The ring buffer implementation used in runtime events can be written by at most one producer at a time but can be read simultaneously by multiple consumers without coordination from the producer. There is a unique ring buffer for every running domain and, on domain termination, ring buffers may be re-used for newly spawned domains. The ring buffers themselves are stored in a memory-mapped file, with the process identifier as the name and the extension .events; this enables them to be read from outside the main OCaml process. See Runtime_events for more information.
The runtime event tracing system provides both OCaml and C APIs which are cursor-based and polling-driven. The high-level process for consuming events is as follows:
We start with a simple example that prints the name, begin and end times of events emitted by the runtime event tracing system:
let runtime_begin _ ts phase =
  Printf.printf "Begin\t%s\t%Ld\n"
    (Runtime_events.runtime_phase_name phase)
    (Runtime_events.Timestamp.to_int64 ts)

let runtime_end _ ts phase =
  Printf.printf "End\t%s\t%Ld\n"
    (Runtime_events.runtime_phase_name phase)
    (Runtime_events.Timestamp.to_int64 ts)

let () =
  Runtime_events.start ();
  let cursor = Runtime_events.create_cursor None in
  let callbacks = Runtime_events.Callbacks.create ~runtime_begin ~runtime_end () in
  while true do
    let list_ref = ref [] in (* for later fake GC work *)
    for _ = 1 to 100 do
      (* here we do some fake GC work *)
      list_ref := [];
      for _ = 1 to 10 do
        list_ref := (Sys.opaque_identity (ref 42)) :: !list_ref
      done;
      Gc.full_major ();
    done;
    ignore (Runtime_events.read_poll cursor callbacks None);
    Unix.sleep 1
  done
The next step is to compile and link the program with the runtime_events library. This can be done as follows:
ocamlopt -I +runtime_events -I +unix unix.cmxa runtime_events.cmxa example.ml -o example
When using the dune build system, this example can be built as follows:
(executable
 (name example)
 (modules example)
 (libraries unix runtime_events))
Running the compiled binary of the example gives an output similar to:
Begin  explicit_gc_full_major         24086187297852
Begin  stw_leader                     24086187298594
Begin  minor                          24086187299404
Begin  minor_global_roots             24086187299807
End    minor_global_roots             24086187331461
Begin  minor_remembered_set           24086187331631
Begin  minor_finalizers_oldify        24086187544312
End    minor_finalizers_oldify        24086187544704
Begin  minor_remembered_set_promote   24086187544879
End    minor_remembered_set_promote   24086187606414
End    minor_remembered_set           24086187606584
Begin  minor_finalizers_admin         24086187606854
End    minor_finalizers_admin         24086187607152
Begin  minor_local_roots              24086187607329
Begin  minor_local_roots_promote      24086187609699
End    minor_local_roots_promote      24086187610539
End    minor_local_roots              24086187610709
End    minor                          24086187611746
Begin  minor_clear                    24086187612238
End    minor_clear                    24086187612580
End    stw_leader                     24086187613209
...
This is an example of self-monitoring, where a program explicitly starts listening to runtime events and monitors itself.
For external monitoring, a program does not need to be aware of the existence of runtime events. Runtime events can be controlled via the environment variable OCAML_RUNTIME_EVENTS_START which, when set, will cause the runtime tracing system to be started at program initialization.
We could remove Runtime_events.start (); from the previous example and, instead, call the program as below to produce the same result:
OCAML_RUNTIME_EVENTS_START=1 ./example
Environment variables can be used to control different aspects of the runtime event tracing system. The following environment variables are available:
The size of the runtime events ring buffers can be configured via OCAMLRUNPARAM, see section 15.2 for more information.
To receive events that are only available in the instrumented runtime, the OCaml program needs to be compiled and linked against the instrumented runtime. For our example program from earlier, this is achieved as follows:
ocamlopt -runtime-variant i -I +runtime_events -I +unix unix.cmxa runtime_events.cmxa example.ml -o example
And for dune:
(executable
 (name example)
 (modules example)
 (flags "-runtime-variant=i")
 (libraries unix runtime_events))
Programmatic access to events is intended primarily for writers of observability libraries and tooling that end-users use. The flexible API enables use of the performance data from runtime events for logging and monitoring purposes.
In this section we cover several utilities in the runtime_events_tools package which provide simple ways of extracting and summarising data from runtime events. The trace utility in particular produces similar data to the previous "eventlog" instrumentation system available in OCaml 4.12 to 4.14.
First, install runtime_events_tools in an OCaml 5.0+ opam switch:
opam install runtime_events_tools
This should install the olly tool in your path. You can now generate runtime traces for programs compiled with OCaml 5.0+ using the trace subcommand:
olly trace trace.json 'your_program.exe .. args ..'
Runtime tracing data will be generated in the json Trace Event Format to trace.json. This can then be loaded into the Chrome tracing viewer or into Perfetto to visualize the collected trace.
The olly utility also includes a latency subcommand which consumes runtime events data and on program completion emits a parseable histogram summary of pause durations. It can be run as follows:
olly latency 'your_program.exe .. args ..'
This should produce an output similar to the following:
GC latency profile:
#[Mean (ms): 2.46, Stddev (ms): 3.87]
#[Min (ms): 0.01, max (ms): 9.17]

Percentile   Latency (ms)
25.0000      0.01
50.0000      0.23
60.0000      0.23
70.0000      0.45
75.0000      0.45
80.0000      0.45
85.0000      0.45
90.0000      9.17
95.0000      9.17
96.0000      9.17
97.0000      9.17
98.0000      9.17
99.0000      9.17
99.9000      9.17
99.9900      9.17
99.9990      9.17
99.9999      9.17
100.0000     9.17
(Introduced in OCaml 4.14)
Note: this feature is considered experimental, and its interface may evolve, with user feedback, in the next few releases of the language.
Consider this natural implementation of the List.map function:
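Such an implementation might look like the following sketch (this is the shape discussed below, where the recursive call appears under the :: constructor):

let rec map f = function
  | [] -> []
  | x :: xs ->
    let y = f x in
    y :: map f xs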
A well-known limitation of this implementation is that the recursive call, map f xs, is not in tail position. The runtime needs to remember to continue with y :: r after the call returns a value r, therefore this function consumes some amount of call-stack space on each recursive call. The stack usage of map f li is proportional to the length of li. This is a correctness issue for large lists on operating systems with limited stack space: the dreaded Stack_overflow exception.
In this implementation of map, the recursive call happens in a position that is not a tail position in the program, but within a datatype constructor application that is itself in tail position. We say that those positions, which are composed of tail positions and constructor applications, are tail modulo constructor (TMC) positions; we sometimes write tail modulo cons for brevity.
It is possible to rewrite programs such that tail modulo cons positions become tail positions; after this transformation, the implementation of map above becomes tail-recursive, in the sense that it only consumes a constant amount of stack space. The OCaml compiler implements this transformation on demand, using the [@tail_mod_cons] or [@ocaml.tail_mod_cons] attribute on the function to transform.
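For instance, the map function above could be annotated as follows (a sketch; the body itself is unchanged):

let[@tail_mod_cons] rec map f = function
  | [] -> []
  | x :: xs ->
    let y = f x in
    y :: map f xs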
This transformation only improves calls in tail-modulo-cons position; it does not improve recursive calls that do not fit in this fragment:
It is of course possible to use the [@tail_mod_cons] transformation on functions that contain some recursive calls in tail-modulo-cons position, and some calls in other, arbitrary positions. Only the tail calls and tail-modulo-cons calls will happen in constant stack space.
This feature is provided as an explicit program transformation, not an implicit optimization. It is annotation-driven: the user is expected to express their intent by adding annotations in the program using attributes, and will be asked to do so in any ambiguous situation.
We expect it to be used mostly by advanced OCaml users needing to get some guarantees on the stack-consumption behavior of their programs. Our recommendation is to use the [@tailcall] annotation on all callsites that should not consume any stack space. [@tail_mod_cons] extends the set of functions on which calls can be annotated to be tail calls, helping establish stack-consumption guarantees in more cases.
A standard approach to get a tail-recursive version of List.map is to use an accumulator to collect output elements, and reverse it at the end of the traversal.
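Such an accumulator-based version might look like the following sketch (the helper name rev_map_acc is illustrative):

let map f l =
  let rec rev_map_acc acc = function
    | [] -> acc
    | x :: xs -> rev_map_acc (f x :: acc) xs
  in
  List.rev (rev_map_acc [] l)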
This version is tail-recursive, but it is measurably slower than the simple, non-tail-recursive version. In contrast, the tail-mod-cons transformation provides an implementation that has comparable performance to the original version, even on small inputs.
Beware that the tail-modulo-cons transformation has an effect on evaluation order: the constructor argument that is transformed into tail-position will always be evaluated last. Consider the following example:
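The type and constructor below are an illustrative sketch, chosen to match the argument names discussed next:

type 'a seq =
  | Nil
  | Cons of { front : 'a; body : 'a seq; rear : 'a }

let[@tail_mod_cons] rec map f = function
  | Nil -> Nil
  | Cons { front; body; rear } ->
    Cons { front = f front; body = map f body; rear = f rear }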
Due to the [@tail_mod_cons] transformation, the calls to f front and f rear will be evaluated before map f body. In particular, this is likely to be different from the evaluation order of the unannotated version. (The evaluation order of constructor arguments is unspecified in OCaml, but many implementations typically use left-to-right or right-to-left.)
This effect on evaluation order is one of the reasons why the tail-modulo-cons transformation has to be explicitly requested by the user, instead of being applied as an automatic optimization.
Other program transformations, in particular a transformation to continuation-passing style (CPS), can make all functions tail-recursive, instead of targeting only a small fragment. Some reasons to provide builtin support for the less-general tail-mod-cons are as follows:
It may happen that several arguments of a constructor are recursive calls to a tail-modulo-cons function. The transformation can only turn one of these calls into a tail call. The compiler will not make an implicit choice, but ask the user to provide an explicit disambiguation.
Consider this type of syntactic expressions (assuming some pre-existing type var of expression variables):
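A sketch of such a type (the exact constructors are illustrative; Let carries a definition and a body):

type exp =
  | Var of var
  | Let of var * exp * exp   (* Let (x, def, body) *)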
Consider a map function on variables. The direct definition has two recursive calls inside arguments of the Let constructor, so it gets rejected as ambiguous.
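With the sketched type above, such a definition might look as follows; the two recursive calls under Let make it ambiguous:

let[@tail_mod_cons] rec map_vars f = function
  | Var v -> Var (f v)
  | Let (x, def, body) ->
    (* rejected: two recursive calls occur among the arguments of Let *)
    Let (f x, map_vars f def, map_vars f body)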
To disambiguate, the user should add a [@tailcall] attribute to the recursive call that should be transformed to tail position:
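Continuing the sketch, annotating the call on the body selects it as the one placed in tail position:

let[@tail_mod_cons] rec map_vars f = function
  | Var v -> Var (f v)
  | Let (x, def, body) ->
    Let (f x, map_vars f def, (map_vars [@tailcall]) f body)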
Be aware that the resulting function is not tail-recursive: the recursive call on def will consume stack space. However, expression trees tend to be right-leaning (lots of Let in sequence, rather than nested inside each other), so putting the call on body in tail position is an interesting improvement over the naive definition: it gives bounded stack space consumption if we assume a bound on the nesting depth of Let constructs.
One would also get an error when using conflicting annotations, asking for two of the constructor arguments to be put in tail position:
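In terms of the sketch above, this would correspond to annotating both calls:

let[@tail_mod_cons] rec map_vars f = function
  | Var v -> Var (f v)
  | Let (x, def, body) ->
    (* rejected: both calls ask to be put in tail position *)
    Let (f x, (map_vars [@tailcall]) f def, (map_vars [@tailcall]) f body)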
Due to the nature of the tail-mod-cons transformation (see section 26.3 for a presentation of the transformation):
The fact that calls in tail position in the source program may become non-tail calls if they go from a tail-mod-cons to a non-tail-mod-cons function is surprising, and the transformation will warn about them.
For example:
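A sketch of the kind of code in question, with append_flatten written here as a local helper that is not annotated:

let[@tail_mod_cons] rec flatten = function
  | [] -> []
  | xs :: xss ->
    let rec append_flatten xs xss =
      match xs with
      | [] -> flatten xss
      | x :: xs -> x :: append_flatten xs xss
    in
    append_flatten xs xss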
Here the append_flatten helper is not annotated with [@tail_mod_cons], so the calls append_flatten xs xss and flatten xss will not be tail calls. The correct fix here is to annotate append_flatten to be tail-mod-cons.
The same warning occurs when append_flatten is a non-tail-mod-cons function of the same recursive group; using the tail-mod-cons transformation is a property of individual functions, not whole recursive groups.
Again, the fix is to specialize append_flatten as well:
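Continuing the sketch above, the helper is annotated as well (in the same-recursive-group case, each function of the group would carry the attribute):

let[@tail_mod_cons] rec flatten = function
  | [] -> []
  | xs :: xss ->
    let[@tail_mod_cons] rec append_flatten xs xss =
      match xs with
      | [] -> flatten xss
      | x :: xs -> x :: append_flatten xs xss
    in
    append_flatten xs xss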
Non-recursive functions can also be annotated [@tail_mod_cons]; this is typically useful for local bindings to recursive functions.
Incorrect version:
Recommended fix:
In other cases, there is either no benefit in making the called function tail-mod-cons, or it is not possible: for example, it is a function parameter (the transformation only works with direct calls to known functions).
For example, consider a substitution function on binary trees:
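A sketch of such a function (the tree type and names are illustrative): the call f v appears in syntactic tail position, and one of the two recursive calls must be selected with [@tailcall] as discussed earlier:

type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

let[@tail_mod_cons] rec substitute f = function
  | Leaf v -> (f [@tailcall false]) v   (* f is a function parameter: not supported by TMC *)
  | Node (l, r) -> Node (substitute f l, (substitute [@tailcall]) f r)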
Here f is a function parameter, not a direct call, and the current implementation is strictly first-order: it does not support tail-mod-cons calls to function arguments. In this case, the user should indicate that they realize this call to f v is not, in fact, in tail position, by using (f [@tailcall false]) v.
To use this advanced feature, it helps to be aware that the function transformation produces a specialized function in destination-passing-style.
Recall our map example:
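That is, the [@tail_mod_cons] version sketched earlier:

let[@tail_mod_cons] rec map f = function
  | [] -> []
  | x :: xs ->
    let y = f x in
    y :: map f xs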
Below is a description of the transformed program in pseudo-OCaml notation: some operations are not expressible in OCaml source code. (The transformation in fact happens on the Lambda intermediate representation of the OCaml compiler.)
let rec map f l =
  match l with
  | [] -> []
  | x :: xs ->
      let y = f x in
      let dst = y ::{mutable} Hole in
      map_dps f xs dst 1;
      dst

and map_dps f l dst idx =
  match l with
  | [] -> dst.idx <- []
  | x :: xs ->
      let y = f x in
      let dst' = y ::{mutable} Hole in
      dst.idx <- dst';
      map_dps f xs dst' 1
The source version of map gets transformed into two functions: a direct-style version, still called map, and a destination-passing-style (DPS) version called map_dps. The destination-passing-style version does not return its result directly; instead, it writes it into a memory location specified by two additional function parameters, dst (a memory block) and idx (a position within the memory block).
The source expression y :: map f xs gets transformed into the creation of a mutable block y ::{mutable} Hole, whose second argument is an uninitialized hole. The block is then passed to map_dps as a destination parameter (with offset 1).
Notice that map does not call itself recursively; it calls map_dps, which then calls itself recursively, in a tail-recursive way.
The call from map to map_dps is not a tail call (this is something that we could improve in the future), but it happens only once when invoking map f l, and all list elements after the first one are processed in constant stack space by map_dps.
This explains the "getting out of tail-mod-cons" subtleties. Consider our previous example involving mutual recursion between flatten and append_flatten.
let[@tail_mod_cons] rec flatten l =
  match l with
  | [] -> []
  | xs :: xss -> append_flatten xs xss
The call to append_flatten, which syntactically appears in tail position, gets transformed differently depending on whether the function has a destination-passing-style version available, that is, whether it is itself annotated [@tail_mod_cons]:
(* if append_flatten_dps exists *)
and flatten_dps l dst i =
  match l with
  | [] -> dst.i <- []
  | xs :: xss -> append_flatten_dps xs xss dst i

(* if append_flatten_dps does not exist *)
and flatten_dps l dst i =
  match l with
  | [] -> dst.i <- []
  | xs :: xss -> dst.i <- append_flatten xs xss
If append_flatten does not have a destination-passing-style version, the call gets transformed to a non-tail call.
Just like tail calls in general, the notion of tail-modulo-constructor position is purely syntactic; some simple refactoring will move calls out of tail-modulo-constructor position.
When a function gets transformed with tail-mod-cons, two definitions are generated, one providing a direct-style interface and one providing the destination-passing-style version. However, not all calls to this function in tail-modulo-cons position will use the destination-passing-style version and become tail calls:
Consider the call Option.map foo x for example: even if foo is called in tail-mod-cons position within the definition of Option.map, that call will never become a tail call. (This would be the case even if the call to Option.map was inside the Option module.)
In general this limitation is not a problem for recursive functions: the first call from an outside module or a higher-order function will consume stack space, but further recursive calls in tail-mod-cons position will get optimized. For example, if List.map is defined as a tail-mod-cons function, calls to it from outside the List module will not become tail calls even when they are in tail position, but the recursive calls within the definition of List.map are in tail-modulo-cons position and do become tail calls: processing the first element of the list will consume stack space, but all further elements are handled in constant space.
These limitations may be an issue in more complex situations where mutual recursion happens between functions, with some functions not annotated tail-mod-cons, or defined across different modules, or called indirectly, for example through function parameters.
OCaml performs an implicit optimization for "tupled" functions, which take a single parameter that is a tuple: let f (x, y, z) = .... Direct calls to these functions with a tuple literal argument (like f (a, b, c)) will call the "tupled" function by passing the parameters directly, instead of building a tuple of them. Other calls, either indirect calls or calls passing a more complex tuple value (like let t = (a, b, c) in f t), are compiled as "inexact" calls that go through a wrapper.
The [@tail_mod_cons] transformation supports tupled functions, but will only optimize "exact" calls in tail position; direct calls to something other than a tuple literal will not become tail calls. The user can manually unpack a tuple to force a call to be "exact": let (x, y, z) = t in f (x, y, z). If there is any doubt as to whether a call can be tail-mod-cons-optimized or not, one can use the [@tailcall] attribute on the called function, which will warn if the transformation is not possible.
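A small illustrative sketch of exact and inexact calls (the function sum3 and its uses are hypothetical):

let sum3 (x, y, z) = x + y + z

let a = sum3 (1, 2, 3)                        (* exact call: no tuple is built *)
let b = let t = (1, 2, 3) in sum3 t           (* inexact call: goes through a wrapper *)
let c = let t = (1, 2, 3) in
        let (x, y, z) = t in sum3 (x, y, z)   (* unpacking forces an exact call *)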
Part IV
This chapter describes the OCaml core library, which is composed of declarations for built-in types and exceptions, plus the module Stdlib that provides basic operations on these built-in types. The Stdlib module is special in two ways: it is automatically linked with the user's object code files, and it is automatically "opened" when a compilation starts or when the toplevel system is launched, so its declarations can be referred to without qualifying them by a module name.
The following built-in types and predefined exceptions are always defined in the compilation environment, but are not part of any module. As a consequence, they can only be referred to by their short names.
type int
The type of integer numbers.
type char
The type of characters.
type bytes
The type of (writable) byte sequences.
type string
The type of (read-only) character strings.
type float
The type of floating-point numbers.
type bool = false | true
The type of booleans (truth values).
type unit = ()
The type of the unit value.
type exn
The type of exception values.
type 'a array
The type of arrays whose elements have type 'a.
type 'a list = [] | :: of 'a * 'a list
The type of lists whose elements have type 'a.
type 'a option = None | Some of 'a
The type of optional values of type 'a.
type int32
The type of signed 32-bit integers. Literals for 32-bit integers are suffixed by l. See the Int32 module.
type int64
The type of signed 64-bit integers. Literals for 64-bit integers are suffixed by L. See the Int64 module.
type nativeint
The type of signed, platform-native integers (32 bits on 32-bit processors, 64 bits on 64-bit processors). Literals for native integers are suffixed by n. See the Nativeint module.
type ('a, 'b, 'c, 'd, 'e, 'f) format6
The type of format strings. 'a is the type of the parameters of the format, 'f is the result type for the printf-style functions, 'b is the type of the first argument given to %a and %t printing functions (see module Printf), 'c is the result type of these functions, and also the type of the argument transmitted to the first argument of kprintf-style functions, 'd is the result type for the scanf-style functions (see module Scanf), and 'e is the type of the receiver function for the scanf-style functions.
type 'a lazy_t
This type is used to implement the Lazy module. It should not be used directly.
exception Match_failure of (string * int * int)
Exception raised when none of the cases of a pattern-matching apply. The arguments are the location of the match keyword in the source code (file name, line number, column number).
exception Assert_failure of (string * int * int)
Exception raised when an assertion fails. The arguments are the location of the assert keyword in the source code (file name, line number, column number).
exception Invalid_argument of string
Exception raised by library functions to signal that the given arguments do not make sense. The string gives some information to the programmer. As a general rule, this exception should not be caught, it denotes a programming error and the code should be modified not to trigger it.
exception Failure of string
Exception raised by library functions to signal that they are
undefined on the given arguments. The string is meant to give some
information to the programmer; you must not pattern match on
the string literal because it may change in future versions (use
Failure _
instead).
exception Not_found
Exception raised by search functions when the desired object could not be found.
exception Out_of_memory
Exception raised by the garbage collector when there is insufficient memory to complete the computation. (Not reliable for allocations on the minor heap.)
exception Stack_overflow
Exception raised by the bytecode interpreter when the evaluation stack reaches its maximal size. This often indicates infinite or excessively deep recursion in the user's program. Before 4.10, it was not fully implemented by the native-code compiler.
exception Sys_error of string
Exception raised by the input/output functions to report an operating system error. The string is meant to give some information to the programmer; you must not pattern match on the string literal because it may change in future versions (use Sys_error _ instead).
exception End_of_file
Exception raised by input functions to signal that the end of file has been reached.
exception Division_by_zero
Exception raised by integer division and remainder operations when their second argument is zero.
exception Sys_blocked_io
A special case of Sys_error raised when no I/O is possible on a non-blocking I/O channel.
exception Undefined_recursive_module of (string * int * int)
Exception raised when an ill-founded recursive module definition is evaluated. (See section 12.2.) The arguments are the location of the definition in the source code (file name, line number, column number).
This chapter describes the functions provided by the OCaml standard library. The modules from the standard library are automatically linked with the user's object code files by the ocamlc command. Hence, these modules can be used in standalone programs without having to add any .cmo file on the command line for the linking phase. Similarly, in interactive use, these globals can be used in toplevel phrases without having to load any .cmo file in memory.
Unlike the core Stdlib module, submodules are not automatically "opened" when compilation starts, or when the toplevel system is launched. Hence it is necessary to use qualified identifiers to refer to the functions provided by these modules, or to add open directives.
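For example, the following two phrases are equivalent ways to use the map function from the List module (a minimal illustration):

(* qualified identifier *)
let squares = List.map (fun x -> x * x) [1; 2; 3]

(* open directive *)
open List
let squares' = map (fun x -> x * x) [1; 2; 3]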
This chapter describes the OCaml front-end, which declares the abstract syntax tree used by the compiler, provides a way to parse, print and pretty-print OCaml code, and ultimately allows one to write abstract syntax tree preprocessors invoked via the -ppx flag (see chapters 13 and 16).
It is important to note that the exported front-end interface follows the evolution of the OCaml language and implementation, and thus does not provide any backwards compatibility guarantees.
The front-end is part of the compiler-libs library. Programs that use the compiler-libs library should be built as follows:
ocamlfind ocamlc other options -package compiler-libs.common other files
ocamlfind ocamlopt other options -package compiler-libs.common other files
Use of the ocamlfind utility is recommended. However, if this is not possible, an alternative method may be used:
ocamlc other options -I +compiler-libs ocamlcommon.cma other files
ocamlopt other options -I +compiler-libs ocamlcommon.cmxa other files
For interactive use of the compiler-libs library, start ocaml and type #load "compiler-libs/ocamlcommon.cma";;.
The unix library makes many Unix system calls and system-related library functions available to OCaml programs. This chapter describes briefly the functions provided. Refer to sections 2 and 3 of the Unix manual for more details on the behavior of these functions.
Not all functions are provided by all Unix variants. If some functions are not available, they will raise Invalid_argument when called.
Programs that use the unix library must be linked as follows:
ocamlc other options -I +unix unix.cma other files
ocamlopt other options -I +unix unix.cmxa other files
For interactive use of the unix library, do:
ocamlmktop -o mytop -I +unix unix.cma
./mytop
or (if dynamic linking of C libraries is supported on your platform), start ocaml and type #load "unix.cma";;.
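As a small illustrative example of using the library (the values printed are of course system-dependent):

let () =
  Printf.printf "pid: %d\n" (Unix.getpid ());
  Printf.printf "time: %.0f\n" (Unix.gettimeofday ())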
Windows: The Cygwin port of OCaml fully implements all functions from the Unix module. The native Win32 ports implement a subset of them. Below is a list of the functions that are not implemented, or only partially implemented, by the Win32 ports. Functions not mentioned are fully implemented and behave as described previously in this chapter.
Functions | Comment |
fork | not implemented, use create_process or threads |
wait | not implemented, use waitpid |
waitpid | can only wait for a given PID, not any child process |
getppid | not implemented (meaningless under Windows) |
nice | not implemented |
truncate, ftruncate | implemented (since 4.10.0) |
link | implemented (since 3.02) |
fchmod | not implemented |
chown, fchown | not implemented (make no sense on a DOS file system) |
umask | not implemented |
access | execute permission X_OK cannot be tested, it just tests for read permission instead |
chroot | not implemented |
mkfifo | not implemented |
symlink, readlink | implemented (since 4.03.0) |
kill | partially implemented (since 4.00.0): only the sigkill signal is implemented |
sigprocmask, sigpending, sigsuspend | not implemented (no inter-process signals on Windows) |
pause | not implemented (no inter-process signals in Windows) |
alarm | not implemented |
times | partially implemented, will not report timings for child processes |
getitimer, setitimer | not implemented |
getuid, geteuid, getgid, getegid | always return 1 |
setuid, setgid, setgroups, initgroups | not implemented |
getgroups | always returns [|1|] (since 2.00) |
getpwnam, getpwuid | always raise Not_found |
getgrnam, getgrgid | always raise Not_found |
type socket_domain | PF_INET is fully supported; PF_INET6 is fully supported (since 4.01.0); PF_UNIX is supported since 4.14.0, but only works on Windows 10 1803 and later. |
establish_server | not implemented; use threads |
terminal functions (tc*) | not implemented |
setsid | not implemented |
The str library provides high-level string processing functions, some based on regular expressions. It is intended to support the kind of file processing that is usually performed with scripting languages such as awk, perl or sed.
Programs that use the str library must be linked as follows:
ocamlc other options -I +str str.cma other files
ocamlopt other options -I +str str.cmxa other files
For interactive use of the str library, do:
ocamlmktop -o mytop str.cma
./mytop
or (if dynamic linking of C libraries is supported on your platform), start ocaml and type #load "str.cma";;.
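As a small illustrative example, splitting a string on commas:

let fields = Str.split (Str.regexp ",") "a,b,c"
(* fields = ["a"; "b"; "c"] *)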
The runtime_events library provides an API for consuming runtime tracing and metrics information from the runtime. See chapter 25 for more information.
Programs that use runtime_events must be linked as follows:
ocamlc -I +runtime_events other options unix.cma runtime_events.cma other files
ocamlopt -I +runtime_events other options unix.cmxa runtime_events.cmxa other files
Compilation units that use the runtime_events library must also be compiled with the -I +runtime_events option (see chapter 13).
The threads library allows concurrent programming in OCaml. It provides multiple threads of control (also called lightweight processes) that execute concurrently in the same memory space. Threads communicate by in-place modification of shared data structures, or by sending and receiving data on communication channels.
The threads library is implemented on top of the threading facilities provided by the operating system: POSIX 1003.1c threads for Linux, macOS, and other Unix-like systems; Win32 threads for Windows. Only one thread at a time is allowed to run OCaml code on a particular domain (see section 9.5.1). Hence, opportunities for parallelism are limited to the parts of the program that run system or C library code. However, threads provide concurrency and can be used to structure programs as several communicating processes. Threads also efficiently support concurrent, overlapping I/O operations.
Programs that use threads must be linked as follows:
ocamlc -I +unix -I +threads other options unix.cma threads.cma other files
ocamlopt -I +unix -I +threads other options unix.cmxa threads.cmxa other files
Compilation units that use the threads library must also be compiled with the -I +threads option (see chapter 13).
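As a small illustrative example, creating a thread and waiting for its termination:

let () =
  let t = Thread.create (fun () -> print_endline "hello from a thread") () in
  Thread.join t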
The dynlink library supports type-safe dynamic loading and linking of bytecode object files (.cmo and .cma files) in a running bytecode program, or of native plugins (usually .cmxs files) in a running native program. Type safety is ensured by limiting the set of modules from the running program that the loaded object file can access, and checking that the running program and the loaded object file have been compiled against the same interfaces for these modules. In native code, there are also some compatibility checks on the implementations (to avoid errors with cross-module optimizations); it might be useful to hide .cmx files when building native plugins so that they remain independent of the implementation of modules in the main program.
Programs that use the dynlink library simply need to include the dynlink library directory with -I +dynlink and link dynlink.cma or dynlink.cmxa with their object files and other libraries.
Note: in order to ensure that the dynamically-loaded modules have access to all the libraries that are visible to the main program (and not just to the parts of those libraries that are actually used in the main program), programs using the dynlink library should be linked with -linkall.
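As a small illustrative sketch (the plugin file name is hypothetical), loading a plugin and reporting any loading error:

let () =
  let plugin = Dynlink.adapt_filename "plugin.cmo" in
  try Dynlink.loadfile plugin
  with Dynlink.Error error -> prerr_endline (Dynlink.error_message error)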
This chapter describes three libraries which were formerly part of the OCaml distribution (Graphics, Num, and LablTk), and a library which has now become part of OCaml's standard library, and is documented there (Bigarray).
Since OCaml 4.09, the graphics library is distributed as an external package. Its new home is:
https://github.com/ocaml/graphics
If you are using the opam package manager, you should install the corresponding graphics package:
opam install graphics
Before OCaml 4.09, this package simply ensured that the graphics library was installed as part of the compiler distribution; starting from OCaml 4.09, the package itself effectively provides the graphics library.
As of OCaml 4.07, the bigarray library has been integrated into OCaml's standard library.
The bigarray functionality may now be found in the standard library Bigarray module, except for the map_file function which is now part of the Unix library. The documentation has been integrated into the documentation for the standard library.
The legacy bigarray library bundled with the compiler is a compatibility library with exactly the same interface as before, i.e. with map_file included.
We strongly recommend that you port your code to use the standard library version instead, as the changes required are minimal.
If you choose to use the compatibility library, you must link your programs as follows:
ocamlc other options bigarray.cma other files
ocamlopt other options bigarray.cmxa other files
For interactive use of the bigarray compatibility library, do:
ocamlmktop -o mytop bigarray.cma
./mytop
or (if dynamic linking of C libraries is supported on your platform), start ocaml and type #load "bigarray.cma";;.
The num library implements integer arithmetic and rational arithmetic in arbitrary precision. It was split off from the core OCaml distribution starting with the 4.06.0 release, and can now be found at https://github.com/ocaml/num.
New applications that need arbitrary-precision arithmetic should use the Zarith library (https://github.com/ocaml/Zarith) instead of the Num library, and older applications that already use Num are encouraged to switch to Zarith. Zarith delivers much better performance than Num and has a nicer API.
Since OCaml version 4.02, the OCamlBrowser tool and the Labltk library are distributed separately from the OCaml compiler. The project is now hosted at https://github.com/garrigue/labltk.
Part V