HimML runs on Unix systems and Amigas. Previous versions also worked on Apple Macintoshes, but the current one lacks some functions there. On a Mac, the only way to launch a HimML session is to double-click on the HimML icon; a text window opens, asking you to enter Unix-style command arguments: enter the arguments to the HimML command, except the command’s name itself. From then on, all work happens in this console, at toplevel, as on Amiga and Unix systems.
On Amigas and Unix boxes, type himml followed by a list of arguments. The legal arguments are obtained by typing himml ?, to which HimML should answer:
and exit. Launching HimML without any arguments is fine. There are other HimML tools, used to compile, link and execute bytecode compiled files; they are listed at the end of this section.
To load a file, the use keyword may be used; it begins a declaration, just like val or type, that asks HimML to load a file and interpret it as if it were typed at the keyboard (except that it does not read from stdin). The search path that use consults can be extended on the command line with the -path switch, or inside HimML by changing the contents of usepath : string list ref, which is a reference to the list of volumes or directories in which to search for files, from left to right.
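For example, to add a directory to the front of the search path and then load a file from it, you might type something like the following at the toplevel (the directory and file names are purely illustrative):

usepath := "Utils" :: !usepath;
use "mylib.ml";

Since directories are searched from left to right, the new directory is consulted first.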
The various options are explained as follows:
-mem can be used only once on the command line.
-maxcells can be used only once on the command line.
-nthreads can be used only once on the command line.
-threadsize can be used only once on the command line.
You may wish to give it a value higher than the default (typically 23227) for memory-hungry programs. A rule of thumb is to evaluate how many cells your program needs (one tuple is one cell, an n-element list uses up n cells, an n-element set or map uses up some 2n cells; or, more practically, run HimML with the -gctrace option on, and look at the statistics on total live homo cells), and then to divide this number by, say, 3.
On the other hand, having a big table for few values makes for longer garbage collection times, so you may also wish to reduce this value on programs that do not use much memory, or which only allocate very short-lived data.
You may wish to raise its value if your program builds and keeps lots of integers in data-structures, or you may wish to decrease its value if you use few integers or only allocate them for temporary computations.
You may wish to raise its value if your program builds and keeps lots of numbers in data-structures, or you may wish to decrease its value if you use few numerical quantities or only allocate them for temporary computations. (In particular, there is no need to increase its value for ordinary number-crunching, except if you are handling big matrices.)
You may wish to raise its value if your program builds and keeps lots of strings, or does a lot of text processing. It is not advised to reduce its value, as many strings are used internally by the compiler and the type-checker.
You may wish to raise its value if your program builds and keeps lots of tuples, records and arrays, or you may wish to decrease its value in case you don’t use many of these structures.
-cmd can be used only once on the command line, and is incompatible with -init.
-init can be used only once on the command line, and is incompatible with -cmd.
The latter means the following: one garbage collection has just been done (if the system crashes during a GC, you will just get GC...), the number of cells in the system is 54272, of which 4608 + 512 = 5120 are considered young (i.e., will be considered highly likely to become garbage at the next GC); among these, 4019 + 25 = 4044 are live, i.e., not free. The system as a whole also contains 4019 + 25 = 4044 live cells. The purpose of the “homo” and “hetero” figures is to distinguish homogeneous cells (pairs, integers, maps, reals, complexes, etc.) from heterogeneous cells, which point to non-first-class data: strings, which point to an area of memory where their contents lie, or arrays, or n-tuples with n ≥ 3, or records with at least 2 fields, which are allocated as a cell pointing to an internal array of values. For hetero cells, the amount of additional memory freed is shown: 43 bytes of strings, none of patcheckbits (an internal structure of the compiler), 360 bytes of stacks (i.e., of local thread structure), 6608 bytes of vectors, and 16 externals were freed. Externals are interfaces between HimML and non-HimML data, typically files. The time taken to do this garbage collection was 0.089 s; the heap had only one generation (the so-called young generation) before garbage collection, and is segmented into two generations afterwards. The system has allocated 7 local threads since startup (it allocates at least one at each toplevel command), most of which have been freed since then. Finally, the last figures show how many bytes of temporary storage (typically for local HimML variables during execution of code) were allocated and freed, of which 52400 remain allocated at the end of garbage collection.
Calling major_gc invokes a major collection, and the argument passed to major_gc is printed at the start of the information block, e.g.:
Finally, using -nodebug directs the bytecode compiler not to output any debugging information. This can be used to produce stripped modules (i.e., without any debugging information), typically to save space or to make reverse engineering of production code more difficult.
Note that, if you compile a module with -nodebug, and execute it under HimML (with the debugger on), then typing control-C or raising non-benign exceptions will enter the debugger, but the debugger won’t be able to extract any information from the compiled code.
As said earlier, the HimML distribution includes other tools to compile, link and run bytecode compiled files:
himml -c foo.ml
compiles "foo.ml", and produces a bytecode file "foo.mlx". This does exactly the same thing as typing #compile "foo.ml" at the HimML prompt; typing open "foo.ml" does almost the same thing, except HimML will then print a list of all types and identifiers defined in "foo.ml", and will declare them in the current toplevel.
himmllnk archive-file file1.mlx… filen.mlx
to create an archive file (in which case it is recommended to give it a .mla extension), or a bytecode executable file.
A typical use of himmldep is to run
at the (Unix) command-line. This will produce a file .depend listing all dependencies between files, which can be used by make to help reconstruct all proper .mlx files.
In fact, the standard makefile for projects using HimML is as follows:
The first line (which works only with GNU make) tells make that to build or rebuild any bytecode file, say foo.mlx, it should call himml -c foo.ml. The OBJS = line is a macro definition, stating which bytecode files we would like to build. The prog: line states the main rule, which is to build a HimML executable file or a library file prog, by calling himmllnk to link all bytecode files in OBJS. The clean and cleanall lines define targets meant to remove compiled files, and are invoked with make clean or make cleanall respectively. Dependencies are recomputed by typing make depend, which writes them to the .depend file; the latter is in turn included in the current makefile using GNU make’s include directive.
If you don’t have GNU make, then you cannot include .depend, and you will have to copy its contents manually at the end of the makefile. Additionally, the %.mlx : %.ml line should be replaced by:
HimML contains a debugger, as shown by consulting the set #debugging features, which should be non-empty. It can be called by the break function:
Another way of entering the debugger is when an exception is raised but not caught by any handler.
There are two ways of entering the debugger. They are distinguished on entry by a message: stop on break (we entered the debugger through break, or by typing control-C or DEL when evaluating an expression), or stop at … (we entered the debugger at a breakpoint located just before the execution of an expression).
In any case, the debugger enters a command loop, under which you can examine the values of expressions, see the call stack, step through code, set breakpoints, resume or abort execution. The debugger presents a prompt, normally (debug). It then waits for a line to be typed, followed by a carriage return, and executes the corresponding command. These commands are:
The c command may take an argument, which should be a HimML expression e. This expression is parsed, type-checked, compiled and evaluated in the current environment (which is the environment as seen from the point where execution was stopped; but see the u, d and w commands). No breakpoint in the expression is ever triggered, and interrupting its evaluation by control-C or DEL just cancels the evaluation and returns to the debugger, without, say, entering a recursive level of debugging.
If the expression successfully evaluates, the resulting value v is returned as the result of the expression on which execution was stopped. This means the following: on a stop on break, the return value is replaced by v, and execution is resumed with this new value instead of the previously computed one; on a stop on entry to an expression e′, the expression e′ is not evaluated, and v masquerades as the value that e′ should have (this is useful when e′ is not a reliable piece of code, but we know in advance what it should return and we don’t want to lose time debugging e′).
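For instance, if execution stopped at a breakpoint set just before a call you do not trust, typing (the value 0 being purely illustrative):

c 0

at the (debug) prompt skips the call entirely and makes 0 stand for its result.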
Note that, although there is some type-checking involved in the evaluation of the expression e, this only provides a relative, not absolute, level of safety. That is, type-checking under the debugger may catch some type errors, but not all (in short, the debugger is not type-safe). For example:
will enter the debugger at the raise expression, and e will be coerced to the finest type the debugger can infer from the definition of inv alone, that is, '#a^ ~1 (since inv : '#a -> '#a^ ~1). However, the only allowable type in the current context would be num. So if you type c 35‘cm, inv will return with a mostly unpredictable value. Even more seriously, in other cases this may cause the HimML system to crash, although this risk is limited because there are run-time safeguards against this in the HimML system when the debugger is present.
The argument expression is parsed, type-checked, compiled and evaluated in the current environment (which is the environment as seen from the point where execution was stopped; but see the u, d and w commands). No breakpoint in the expression is ever triggered, and interrupting its evaluation by control-C or DEL just cancels the evaluation and returns to the debugger, without, say, entering a recursive level of debugging.
As for the c command, the type-checker can only provide a relative, not absolute, level of type-safety, and it is possible to evaluate non-sensical expressions because the type-checker cannot hope to detect all possible type errors. This is because the debugger uses only the types that have been inferred statically, but it cannot specialize them to the real run-time types.
After the breakpoint has been successfully set, it is assigned a breakpoint number, which is then shown between square brackets, followed by information about the breakpoint location. If no breakpoint could be set at the indicated location, the debugger won’t install any breakpoint, and will say so.
The commands u and d move up and down the stack respectively, and the current level is shown in the stack display by being prefixed with the > character. This shows the environment that will be used to type-check, compile and evaluate expressions, as with the p print command or the c continue command. By default, the current level is 1 (the bottom of the stack).
By default, w only shows 10 levels of stack. This is to avoid huge stack dumps in the case of infinite recursions. You can specify another depth limit by giving w a numeric argument.
This also sets the current function to the specified one, so that the b command can then be issued to set a breakpoint in this function without having to retype the name of the function.
The way that the interpreter gives control to the debugger is by means of code points, which are points in the code where the compiler adds extra instructions. These instructions usually do nothing. When you set a breakpoint, they are patched to become the equivalent of break. Alternatively, these instructions also enter the debugger when we are single-stepping through some code.
These instructions are added by default by the compiler, but they tend to slow down the interpreter. If you wish to dispense with debugging information, you may issue the directive:
which turns off generation of debugging information (of code points). If you wish to reinclude debugging information, type:
These directives are seen as declarations by the compiler, just like val or type declarations. As such, they obey the same scope rules. It is recommended to use them in a properly scoped fashion, either inside a let or local expression, or confined in a module.
The way that the interpreter records profiling information is by means of special instructions that do the tallying.
These instructions are not added by default by the compiler, since they tend to slow down the interpreter by roughly a factor of 2, and you may not wish to gather profiling information on every piece of code you write. To use the profiler, you first have to issue the directive:
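(*P+*)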
which turns generation of profiling instructions on. The functions that will be profiled are exactly those that were declared with the fun or the memofun keyword.
If you wish to turn it off again, type:
These directives are seen as declarations by the compiler, just like val or type declarations. As such, they obey the same scope rules. It is recommended to use them in a properly scoped fashion, either inside a let or local expression, or confined in a module. Usually, you will want to profile a collection of modules. It is then advised to add (*P+*) at the beginning of each. Time spent in non-profiled functions will be taken into account as though it had been spent in their profiled callers.
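For instance, a module you wish to profile might simply start with this directive, followed by ordinary function definitions (the function below is purely illustrative):

(*P+*)
fun fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

Being declared with fun, fib will then appear in the profiling data described below.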
Then, the HimML system provides the following functions to help manage profiling data:
returns the set of all profiling data that the interpreter has accumulated until now on all profiled functions. This is a dump of all internal profiling structures of the interpreter.
The location field describes where the profiled function is located. Its first component is the function name, its second component is the name of the file where this function was defined (or the empty string "" if it was defined at the toplevel prompt), and its third and fourth components are respectively the starting and ending positions of the definition in this file, as line/column pairs. Note that the function name alone is not enough to say accurately which function is intended, as you can build anonymous functions (by fn, for example): it was chosen to let these functions inherit the name of the function in which they are textually enclosed. The file name and positions in the file are then intended to identify more precisely which function is described.
The ncalls field shows how many times this function was called.
The proper and total fields contain statistics in the same format: time is the time spent in the function (in the format returned by times, i.e. user time and system time), ngcs is the number of garbage collections that were done while executing the function (this gives a rough idea of the memory consumption of the function), and gctime is the time spent garbage collecting in this function. While the statistics in proper only include what happened while the interpreter was actually executing the function itself, total also includes the time spent executing all its callees.
report_profiles only reports statistics for those functions that were called at least once (or at least once since the last call to reset_profiles).
report_profiles is pretty low-level, and is intended to be used as a basic building block for more useful report generators. One such generator is located in "Utils/profile.ml". To get a meaningful report, execute your program, then type:
to get a report on your console, or:
to get a report in a file named "prof.out" in the current directory. (To open the module "profile" on a Macintosh, write open "Utils:profile"; in general, it’s better to modify the path to include the Utils directory, and not bother with directory names.)
What can you do with profiling information? The main goal is to detect what takes up too much time in your code, so as to focus your optimization efforts on the parts that really need them. A good strategy for doing this is the following:
The main goal of the HimML module system is to implement separate compilation, where you can build your program as a collection of modules that you can compile independently from each other, and then link them together.
The HimML module system was designed so that it integrates well with the rest of the core language, while remaining simple and intuitive. For the time being, the HimML module system does not provide the other feature that modules are useful for, namely management of name spaces. The module system of Standard ML seems best for this purpose, although it is much more complex than the HimML module system.
Consider the following example. Assume that your program consists naturally of three files, a.ml, b.ml and c.ml. The most natural way of compiling it would be to type:
But b.ml will probably use some types and values that were defined in a.ml, and similarly c.ml will probably use some types and values defined in a.ml or b.ml. In particular, if you want to modify a definition in a.ml, you will have to reload b.ml and c.ml to be sure that everything has been updated.
This is not dramatic when you have only a few files, provided they are not too long. But if they are long or numerous, this will take a lot of time. Separate compilation is the cure: with it, you can compile a.ml, b.ml, and c.ml separately, without having to reload other files first.
The paradigm that has been implemented in HimML is close to that used in CaML, and even closer in spirit to the C language. In particular, modules are just source files, as in C. Two new keywords are added to HimML: extern and open. Note that the Standard ML module system also has an open keyword, but there is no ambiguity as it is followed by a structure identifier like Foo in Standard ML, and by a module name like "foo" in HimML.
The extern keyword specifies some type or some value that we need to compile the current file, telling the type-checker and compiler that it is defined in some other file. Otherwise, if you wrote, for example, val y=x+1 in b.ml, while x is defined in a.ml, the type-checker would complain that x is undefined when compiling b.ml. To alleviate this, just precede the declaration for y by:
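extern val x : int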
This tells the compiler that x has to be defined in some other file, and that its value will only be known when linking all files together. This is called importing the value of x from another module.
Not only values, but datatypes can be imported:
imports a datatype foo. The compiler will then know that some other module defines a datatype (or an abstype) of this name. However, it won’t know whether this datatype admits equality, i.e. whether you can compare objects of this datatype by =. If you wish to import foo as an equality-admitting datatype, then you should write:
Of course, if foo is a parameterized datatype, you have to declare it with its arity, for example:
for a unary (not necessarily equality-preserving) datatype, or
for an equality-preserving datatype with two type parameters.
Finally, dimensions can be imported as well:
imports foo as a dimension (type of a physical quantity, typically).
Given this, what does the following mean? We write a file "foo.ml", containing:
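extern val x : int
val y = x + 1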
Then this defines a module that expects to import a value named x, of type int (alternatively, to take x as input), and will then define a new value y as x+1 and export it.
Try the following at the toplevel (be sure to place file "foo.ml" above somewhere on the load path, as referenced by the variable usepath):
You should then see something like:
Opening "foo" with the open declaration above went through the following steps:
In fact, open will recompile .mlx files from the corresponding .ml files whenever one of the .ml files on which it depends has been updated, so as to maintain consistency between the textual versions of the modules (in .ml files, usually) and their precompiled versions (the .mlx files). On the other hand, if an up-to-date .mlx file is present, it won’t recompile it, and will proceed directly to the next step.
A variant on open is open*, which does just the same, except it does not try to recompile the source file "foo.ml": it just assumes that "foo.mlx" is up to date, or fails. This is useful when shipping compiled bytecode modules, and is used internally in the himmlpack and himmllnk tools.
Assume now that we didn’t have any value x handy; then open would still have precompiled and opened the resulting object module "foo.mlx". Only, it would have failed to link it to the rest of the system. If you wish to just compile "foo.ml" without loading it and linking it, issue the directive:
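#compile "foo.ml"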
at the toplevel. (The # sign must be at the start of the line.) This compiles, or re-compiles, "foo.ml" and writes the result to "foo.mlx".
Another problem pertaining to separate compilation is how to share information between separate modules. For example, you might want to define again three modules a.ml, b.ml and c.ml, where a.ml would define some value f (say, a function from string to int), and b.ml and c.ml would use it.
A first way to do this would be to write:
but this approach suffers from several defects. First, no check is done that the type of f is the same in all three files; in fact, the check will eventually be performed at link time, that is, when doing:
but we would rather be warned when first precompiling the modules.
Then, whenever the type of f changes in a.ml, we would have to change the extern declarations in all other files, which can be tedious and error-prone.
The idea is then to do as in the C language, namely to use one header file common to all three modules. (This approach still has one defect, and we shall see later on how we should really proceed.) That is, we would define an auxiliary file "a_h.ml" (although the name is not meaningful, the convention in HimML is to add _h to a module name to get the name of the corresponding header file), which would contain only extern declarations. This file, which in our case contains:
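extern val f : string -> int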
is then called a header file.
We then write the files above as:
This way, there is only one place where we have to change the type of f in case we wish to do it: the header file a_h.ml.
What is the meaning of using a_h.ml in a.ml, then? Well, this is the way that type checks are effected across modules. The meaning of extern then changes: in a.ml, f is defined after having been declared extern in a_h.ml, so that f is understood by HimML not as being imported, but rather as being exported to other modules. This allows HimML to type-check the definition of f against its extern declaration, and at the same time to resolve the imported symbol f as the definition in a.ml. This is more or less the way it is done in C.
One thing that still does not work with this scheme, however, is how we can share datatypes. This is because datatype declarations are generative. Try the following. In a_h.ml, declare a new datatype:
In a.ml, define the datatype and the value x:
Now in b.ml, write:
Then, open "a", then "b". This does not work: why? The reason is that the definition of the datatype foo in a_h.ml is read twice, once when compiling a.ml, then again when compiling b.ml, and both readings created fresh datatypes (which just happen to have the same name foo). These datatypes are distinct; hence in val y = x : foo, x has the old foo type, whereas the cast to foo refers to the new foo type.
The remedy is to avoid use-ing header files, and rather to open them. So write the following in a.ml:
and in b.ml:
Opening a_h produces a compiled module a_h.mlx, which holds the definition for foo and the declaration for x. In the compiled module, the datatype declaration for foo is precompiled, so that opening a_h does not generate a new datatype foo each time a_h is opened; rather, it re-imports the same one.
Technically, imagine that fresh datatypes are produced by pairing their name foo with a counter, so that each time we type datatype foo = FOO of int at the toplevel, we generate a type (foo,1), then (foo,2), and so on. This process is slightly changed when compiling modules, and the datatype name is paired with the name of the module instead, say, (foo,a_h). Opening a_h twice then reimports the same datatype.
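As a minimal toplevel illustration of this generativity (the exact error message will vary):

datatype foo = FOO of int;
val x = FOO 1;
datatype foo = FOO of int;
val y = x : foo;

The last line is rejected: x was built with the first foo, while the type constraint refers to the second, distinct foo.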
The same works for exceptions, except there is no extern exception declaration. The reason is just that it would do exactly the same as what exception already does in a module. If you declare:
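exception Bar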
in a_h.ml, and import a_h as above, by writing open "a_h" in a.ml and b.ml, then both a.ml and b.ml will be able to share the exception Bar. Typing the following in a_h.ml would not work satisfactorily, since Bar would not be recognized as a constructor in patterns:
That is, it would then become impossible to write expressions such as:
in a.ml. However, if you don’t plan to use pattern matching on Bar, then the latter declaration is perfectly all right.
The following commands are available in HimML:
The open keyword can also be used in local declarations, e.g.:
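let val y = 3
    open "foo"
in x + 1 end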
is allowed, and links the module locally. That is, assuming that foo.mlx imports y and exports x = 2*y, then the above would return 2 * 3 + 1, namely 7.
It is easier to compile modules by typing the following under the shell:
which does exactly the same as launching HimML, and typing #compile "foo"; quit 0; under the HimML toplevel.
You can then use himml as a HimML standalone compiler, and compile each of your modules with himml -c. This is especially useful when using the make utility. A typical makefile would then look like:
The first lines define a rule for making compiled HimML modules from source files ending in .ml. This rule has a syntax specific to GNU make. If your make utility does not support it, replace it by:
The last lines of the above makefile represent dependencies: that a.mlx depends on a.ml and a_h.mlx means that make should rebuild a.mlx (from a.ml, then) whenever it is older than a.ml or a_h.mlx. Such dependencies can be found automatically by the himmldep utility. For example, the dependency line for a.mlx was obtained by typing:
at the shell prompt.
There is no specific way to link compiled modules together, since open already does a link phase. To link a.mlx and b.mlx, write a new module, say pack.ml, containing:
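open "a"
open "b"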
then compile pack.ml. The resulting pack.mlx file can also be executed, provided it has no pending imported identifiers: either launch HimML, open pack, and run main (); (provided pack.ml exports such a function), or, even more easily, type the following from the shell:
Under Unix, every module starts with the line:
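#!/usr/local/bin/himmlrun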
assuming that /usr/local/bin is the directory where himmlrun was installed, so that you can even make pack.mlx have an executable status:
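chmod +x pack.mlx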
and then run it as though it were a proper executable file:
This will launch himmlrun on module pack.mlx, find a function main and run it.
Any ASCII text editor can be used to write HimML sources. But an editor can also be used as an environment for HimML. In GNU Emacs, there is a special mode for Standard ML, called ‘sml-mode.el’, which comes with the Standard ML of New Jersey distribution and can be adapted to deal with HimML: this is the ‘ml-mode.el’ file. However, it was felt that it did not indent properly in all cases, because of the complicated nature of the ML syntax. A replacement version, called ‘himml-mode.el’, is in the works; it is not yet operational.
Remember: a feature is nothing but a documented bug! You may therefore consider the following as features :-).
This is normal. The installation procedure needs to make configuration files, for interpreting your favorite options (in file OPTIONS) or for determining system or compiler behaviours. So, just do as indicated.
Then leave it alone. Most options have reasonable default values.
See next question.
Some operating systems (mostly BSD systems, although the only example I know is AIX) implement a “smart” longjmp() routine that first checks whether the current stack pointer is lower than the one it is trying to restore, and aborts if this is not the case. HimML needs to be able to do just that, in order to implement continuations (and continuations are heavily used internally, even if you don’t plan to use them). The best solution I’ve come up with on AIX is to write a small patching utility (dpxljhak) that hunts for a specific piece of code in the prologue of the longjmp() function and puts no-ops instead. A better solution would be to rewrite the function in assembler, but I’ve been unable to do this.
If this happens to you, try to rewrite longjmp() so that it does not check for stack levels and link your new definition. Or write a patch, just like me; you’ll need to experiment a bit.
Please also contribute your modification so that I can include it in the next HimML release. (See MAINTENANCE at the end of the OPTIONS file to know whom to write to.)
Cray machines have a weird stack format, and my scheme for capturing continuations has no hope of working on these machines. If it’s absolutely necessary for you, I’ll see what I can do, provided you promise to tell me whether it works or not. (See MAINTENANCE at the end of the OPTIONS file to know my address.)
I don’t have any VMS machine handy, so I cannot test HimML on it. The HimML implementation is pretty much centered around Unix, so I would be surprised if it worked without changes. Please tell me what you have been forced to do to make it work.
PC-Dos machines won’t do. 640K is not enough for HimML, and HimML has no knowledge of extended or expanded memory. HimML must run in one segment only, lest its sharing mechanism be defeated by one physical address having two distinct representations (from two different segments). This may work on 486’s or higher, which can use large segments, but the operating system (Dos or Windows, any version until now) is the stumbling block. Your best bet is to change for Linux or any other Unix for PCs. Windows/NT or OS/2 is expected not to pose any problem.
Check the OPTIONS file: there is no safeguard against illegal values there (in particular stack values). Put back the default values; if this does not work, try to increase the stack parameters (notably SAFETY_SIZE and SECURITY). See also previous questions; it is quite likely that this is due to stack problems. If nothing works, mail me (goubault@lsv.ens-cachan.fr, see MAINTENANCE at the end of the OPTIONS file).
Most probably, you have not terminated your command line with a semicolon (;). Although the syntax of Standard ML makes semicolons optional between declarations, the toplevel parser has no way of knowing that input is complete unless it finds a terminating semicolon (or an end of file). Consider also all the ways of completing an input such as, say, 1: if you write a semicolon afterwards, then this is an abbreviation of val it=1;, but if you write +2;, even on the following line, then you really meant val it=1+2;, and if you just type return after 1, the parser has no way of knowing which possibility you intended.
It may happen that typing a semicolon does not cure the problem. This may happen if you have not closed all parentheses and brackets. Consider (frozzle (): if you type a semicolon afterwards, then your input is still incomplete, as you may want to write, say, (frozzle (); foo). The semicolon is not only a declaration separator, but also the sequence instruction.
First, check that you are not defining or declaring datatypes (or dimensions) in header files that you use instead of opening. Each time you use a given file, it creates new versions of the datatypes or dimensions inside it. To avoid it, open the file instead; this creates unique stamps for the datatype (or dimension), which it records in a file of the same name, with .mlx at the end. This will work only if your header file can be compiled separately, so be prepared to modularize your code.
If the above does not apply, it may happen that your .ml files have inconsistent modification dates. The module system always tries to recompile a .ml file when the .ml file appears to be newer than the corresponding .mlx file. Therefore, if the last modification date of the .ml file is some future date, it will always recompile it, as many times as it is opened; and this leads to the same problem as above. A quick fix is to set the modification date manually (with touch on Unix, or setdate on Amigas; there’s probably a public-domain utility to fix this on Macintoshes, but I don’t know). In any case, there’s probably something wrong with the way the date is set up on your system, and it’s worth having a look at it.
This is an alpha revision of HimML. This means that I do not consider it a distributable version: I deem the product robust enough to be given only to my friends, counting on their understanding support, mostly as far as bugs are concerned. It also means that I want some feedback on the usability of the language, and on reasonable ways to improve the implementation.
To help me improve the implementation (and possibly the language, though I am not eager to), you can submit a note to the person in charge of maintaining the system (type #maintenance features at the toplevel to know who, where and when). The preferred means of communication is electronic mail, but others (snail-mail notably) are welcome. If you think you have found a bug in HimML, or if you want something changed in HimML, you should send the person in charge a message containing the following:
In case of a bug, the preferred description is in the form of a short piece of code, together with the symptoms, and the kind of machine and operating system you are working on. It should be possible for somebody other than you to replay the bug. If you cannot find any small piece of code that exhibits the same buggy behaviour as the one you have just experienced, send the contents of the HimML.trace file: every time you use HimML, it logs every single toplevel or file input in this file, so as to ease replaying your actions. This may not always work, but it can help. (This file may have another name, if you have chosen to use the -replay-file command-line option.)
In case of a suggestion, please refrain from submitting your idea of what would be a cute extension of the language. Suggestions should improve the level of comfort you get from using the implementation, and should be implementable without destroying the spirit of HimML. If you do make a suggestion, definitely argue that it is needed, and the maintainer will try and see if it is doable.