Friday, November 3, 2017

Creating custom object transformations with NiJS and PNDP

In a number of earlier blog posts, I have described two internal DSLs for Nix -- NiJS, a JavaScript-based internal DSL, and PNDP, a PHP-based internal DSL.

These internal DSLs have a variety of application areas. Most of them are just experiments, but the most serious one is code generation.

Using an internal DSL for generation has a number of advantages over the more commonly used approach of string generation. For example, when composing strings containing Nix expressions, we must make sure that any variable in the host language that we append to a generated expression is properly escaped to prevent code injection attacks.

Furthermore, we also have to take care of indentation if we want the generated Nix expression code to be readable. Finally, string manipulation itself is not a very intuitive activity, as it makes it hard to see what the generated code will look like.

Translating host language objects to the Nix expression language


A very important feature of both internal DSLs is that they can literally translate some language constructs from the host language (JavaScript or PHP) to the Nix expression language, because they have a (nearly) identical meaning. For example, the following JavaScript code fragment:

var nijs = require('nijs');

var expr = {
  hello: "Hello",
  name: {
    firstName: "Sander",
    lastName: "van der Burg"
  },
  numbers: [ 1, 2, 3, 4, 5 ]
};

var output = nijs.jsToNix(expr, true);
console.log(output);

will output the following Nix expression:

{
  hello = "Hello",
  name = {
    firstName = "Sander";
    lastName = "van der Burg";
  };
  numbers = [
    1
    2
    3
    4
    5
  ];
}

In the above example, strings will be translated to strings (and quotes will be escaped if necessary), objects to attribute sets, and the array of numbers to a list of numbers. Furthermore, the generated code is also pretty printed so that attribute set and list members have 2 spaces of indentation.

Similarly, in PHP we can compose the following code fragment to get an identical Nix output:

use PNDP\NixGenerator;

$expr = array(
  "hello" => "Hello",
  "name" => array(
    "firstName" => "Sander",
    "lastName => "van der Burg"
  ),
  "numbers" => array(1, 2, 3, 4, 5)
);

$output = NixGenerator::phpToNix($expr, true);
echo($output);

The PHP generator uses a number of clever tricks to determine whether an array is associative or sequential -- the former gets translated into a Nix attribute set while the latter gets translated into a list.
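The exact detection logic is an implementation detail of PNDP, but a minimal sketch of such a check (the function name is hypothetical) could look as follows:

function isSequentialArray(array $array)
{
    /* An array counts as sequential when its keys are exactly 0..n-1.
       Anything else is treated as associative. */
    return $array === array() || array_keys($array) === range(0, count($array) - 1);
}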

There are constructs in the Nix expression language for which no equivalent exists in the host language. For example, Nix allows you to define values of a 'URL' and 'file' type, for which neither JavaScript nor PHP has a direct equivalent. Moreover, it may be desirable to generate other kinds of language constructs, such as function declarations and function invocations.

To still generate these kinds of objects, you must compose an abstract syntax tree from objects that inherit from the NixObject prototype or class. For example, we can define a function invocation to fetchurl {} in Nixpkgs as follows in JavaScript:

var expr = new nijs.NixFunInvocation({
    funExpr: new nijs.NixExpression("fetchurl"),
    paramExpr: {
        url: new nijs.NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
        sha256: "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
    }
});

and in PHP as follows:

use PNDP\AST\NixExpression;
use PNDP\AST\NixFunInvocation;
use PNDP\AST\NixURL;

$expr = new NixFunInvocation(new NixExpression("fetchurl"), array(
    "url" => new NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
    "sha256" => "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
));

Both of the objects in the above code fragments translate to the following Nix expression:

fetchurl {
  url = mirror://gnu/hello/hello-2.10.tar.gz;
  sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

Transforming custom object structures into Nix expressions


The earlier described use cases are basically one-on-one translations from the host language (JavaScript or PHP) to the guest language (Nix). In some cases, literal translations do not make sense -- for example, it may be possible that we already have an application with an existing data model from which we want to derive deployments that should be carried out with Nix.

In the latest versions of NiJS and PNDP, it is also possible to specify how to transform custom object structures into a Nix expression. This can be done by inheriting from the NixASTNode class or prototype and overriding the toNixAST() method.

For example, we may have a system already providing a representation of a file that should be downloaded from an external source:

function HelloSourceModel() {
    this.src = "mirror://gnu/hello/hello-2.10.tar.gz";
    this.sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

The above code fragment defines a constructor function composing an object that refers to the source tarball of the GNU Hello package, provided by a GNU mirror site.

A direct translation of an object constructed by the above function to the Nix expression language does not provide anything meaningful -- for example, it cannot be used to let Nix fetch the package from the mirror site.

We can inherit from NixASTNode and implement our own custom toNixAST() function to provide a more meaningful Nix translation:

var nijs = require('nijs');
var inherit = require('nijs/lib/ast/util/inherit.js').inherit;

/* HelloSourceModel inherits from NixASTNode */
inherit(nijs.NixASTNode, HelloSourceModel);

/**
 * @see NixASTNode#toNixAST
 */
HelloSourceModel.prototype.toNixAST = function() {
    return this.args.fetchurl()({
        url: new nijs.NixURL(this.src),
        sha256: this.sha256
    });
};

The toNixAST() function shown above composes an abstract syntax tree (AST) for a function invocation to fetchurl {} in the Nix expression language, with the url and sha256 properties as parameters.

An object that inherits from the NixASTNode prototype also indirectly inherits from NixObject. This means that we can directly attach such an object to any other AST object. The generator uses the underlying toNixAST() function to automatically convert it to its AST representation:

var helloSource = new HelloSourceModel();
var output = nijs.jsToNix(helloSource, true);
console.log(output);

In the above code fragment, we directly pass the constructed HelloSourceModel object instance to the generator. The output will be the following Nix expression:

fetchurl {
  url = mirror://gnu/hello/hello-2.10.tar.gz;
  sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
}

In some cases, it may not be possible to inherit from NixASTNode, for example, when the object already inherits from another prototype or class that is beyond the user's control.

In such cases, it is also possible to use the NixASTNode constructor function as an adapter. For example, we can take any object exposing a toNixAST() function:

var helloSourceWrapper = {
    src: "mirror://gnu/hello/hello-2.10.tar.gz",
    sha256: "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i",

    toNixAST: function() {
        return new nijs.NixFunInvocation({
            funExpr: new nijs.NixExpression("fetchurl"),
            paramExpr: {
                url: new nijs.NixURL(this.src),
                sha256: this.sha256
            }
        });
    }
};

By wrapping the helloSourceWrapper object in the NixASTNode constructor, we can convert it to an object that is an instance of NixASTNode:

new nijs.NixASTNode(helloSourceWrapper)

In PHP, we can change any class into a NixASTNode by implementing the NixASTConvertable interface:

use PNDP\AST\NixASTConvertable;
use PNDP\AST\NixURL;

class HelloSourceModel implements NixASTConvertable
{
    /**
     * @see NixASTConvertable::toNixAST()
     */
    public function toNixAST()
    {
        return $this->args->fetchurl(array(
            "url" => new NixURL($this->src),
            "sha256" => $this->sha256
        ));
    }
}

By passing an object that implements the NixASTConvertable interface to the NixASTNode constructor, it can be converted:

new NixASTNode(new HelloSourceModel())
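For example, the following sketch combines the adapter with the phpToNix() generator shown earlier to print the corresponding Nix expression:

use PNDP\NixGenerator;
use PNDP\AST\NixASTNode;

/* Adapt the model object and convert it to pretty-printed Nix code */
$expr = new NixASTNode(new HelloSourceModel());
$output = NixGenerator::phpToNix($expr, true);
echo($output);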

Motivating use case: the node2nix and composer2nix generators


My main motivation to use custom transformations is to improve the quality of the node2nix and composer2nix generators -- the former converts NPM package configurations to Nix expressions and the latter converts PHP composer package configurations to Nix expressions.

Although NiJS and PNDP provide a number of powerful properties that improve the code generation steps of these tools (e.g. I no longer have to think much about escaping strings or pretty printing), there are still many organizational coding issues left. For example, the code that parses the configurations, the code that fetches the external sources, and the code that generates Nix expressions are mixed. As a consequence, the code is hard to read, update, and maintain, and it is difficult to ensure its correctness.

The new transformation facilities allow me to separate these concerns much better. Both generators now have a data model that reflects their respective problem domains. For node2nix, I could compose the following (simplified) class diagram:


A crucial part of node2nix's generator is the Package class, shown on the top left of the diagram. A package requires zero or more packages as dependencies and may provide zero or more packages in the node_modules/ folder residing in the package's base directory.

For readers not familiar with NPM's dependency management: every package can install its dependencies privately in a node_modules/ folder residing in its base directory. The CommonJS module system ensures that every file is considered to be a unique module that should not interfere with other modules. Sharing is accomplished by putting a dependency in the node_modules/ folder of an enclosing parent package.

NPM 2.x always installs a package dependency privately unless a parent package exists that can provide a conforming version. NPM 3.x (and later) will also move a package as high up in the node_modules/ folder hierarchy as possible, to prevent too many layers of nested node_modules/ folders (this is particularly a problem on Windows). The class structure in the above diagram reflects this kind of dependency organisation.

In addition to a package dependency graph, we also need to obtain package metadata and compute their output hashes. NPM packages originate from various kinds of sources, such as the NPM registry, Git repositories, HTTP sites and local directories on the filesystem.

To optimize the process and support sharing of common sources among packages, we can use a source cache that memorizes all unique source references.

The Package::resolveDependencies() method sets the generation process in motion -- it constructs the dependency graph, replicating NPM's dependency resolution algorithm as faithfully as possible, and resolves the metadata of all dependencies (including the transitive ones).

After resolving all dependencies and their metadata, we must generate the output Nix expressions. One Nix expression is copied (the build infrastructure) and two are generated -- a composition expression and a package or collection expression.

We can also compose a class diagram for the generation infrastructure:


In the above class diagram, every generated expression is represented by a class inheriting from NixASTNode. We can also reuse some classes from the domain model as constituents for the generated expressions, by letting them inherit from NixASTNode as well and overriding the toNixAST() method:

  • The source objects can be translated into sub expressions that invoke fetchurl {} and fetchgit {}.
  • The sources cache can be translated into an attribute set exposing all sources that are used as dependencies for packages.
  • A package instance can be converted into a function invocation to nodeenv.buildNodePackage {} that, in addition to configuring build properties, binds the required dependencies to the sources in the sources cache attribute set.

By decomposing the expression into objects and combining the objects' AST representations, we can nicely modularize the generation process.

For composer2nix, we can also compose a class diagram for its problem domain:


The above class diagram has many similarities to node2nix's, but also some major differences -- composer provides so-called lock files that pinpoint the exact versions of all dependencies and transitive dependencies. As a result, we do not need to replicate composer's dependency resolution algorithm.

Instead, the generation process is driven by the ComposerConfig class that encapsulates the properties of the composer.json and composer.lock files of a package. From a composer configuration, the generator constructs a package object that refers to the package we intend to deploy and populates a source cache with source objects that come from various sources, such as Git, Mercurial and Subversion repositories, Zip files, and directories residing on the local filesystem.

For the generation process, we can adopt a similar strategy that exposes the generated Nix expressions as classes and uses some classes of the domain model as constituents for the generation process:


Discussion


In this blog post, I have described a new feature for the NiJS and PNDP frameworks that makes it possible to implement custom transformations. Among its benefits are that an existing object model can be reused and that concerns in an application can be separated much more conveniently.

These facilities are not only useful for the improvement of the architecture of the node2nix and composer2nix generators -- at the company I work for (Conference Compass), we developed our own domain-specific configuration management tool.

Despite the fact that it uses several tools from the Nix project to carry out deployments, it uses a domain model that is not Nix-specific at all. Instead, it uses terminology and an organization that reflects company processes and systems.

For example, we use a backend-for-frontend organization that provides a backend for each mobile application that we ship. We call these backends configurators. Optionally, every configurator can import data from various external sources that we call channels. The tool's data model reflects this kind of organization, and generates Nix expressions that contain all relevant implementation details, if necessary.

Finally, there is another reason, beyond quality improvement, why I gave node2nix a much cleaner architecture. Currently, NPM version 5.x (that comes with Node.js 8.x) is still unsupported. To make it work with Nix, we require a slightly different generation process and a completely different builder environment. The new architecture allows me to reuse the common parts much more conveniently. More details about the new NPM support will follow (hopefully) soon.

Availability


I have released new versions of NiJS and PNDP that have the custom transformation facilities included.

Furthermore, I have decided to release new versions of node2nix and composer2nix that use the new generation facilities in their architecture. The improved architecture revealed a very uncommon but nasty bug with bundled dependencies in node2nix, which is now solved.

Tuesday, October 3, 2017

Deploying PHP composer packages with the Nix package manager

In two earlier blog posts, I have described various pieces of my custom web framework that I used to actively develop many years ago. The framework is quite modular -- every concern, such as layout management, data management, the editor, and the gallery, is separated into a package that can be deployed independently, so that web applications only have to include what they actually need.

Although modularity is quite useful for a variety of reasons, the framework did not start out as being modular -- when I just started developing web applications in PHP, I did not reuse anything at all. Slowly, I discovered similarities between my projects and started sharing snippets of common functionality between them. Gradually, keeping these common aspects up to date became a burden. As a result, I developed a "common framework" that I reused among all my PHP projects.

Having a common framework for my web application projects reduced the amount of required maintenance, but introduced a new drawback -- its size kept growing and growing. As a result, many simple web applications that only required a small subset of the framework's functionality still had to embed the entire framework, making them unnecessarily big.

Today, a bit of extra PHP code is not so much of a problem, but around the time I was still actively developing web applications, many shared web hosting providers only offered a small amount of storage capacity, typically just a few megabytes.

To cope with the growing size of the framework, I decided to modularize the code by separating the framework's concerns into packages that can be deployed independently. I "invented" my own conventions to integrate the framework packages into web applications:

  • In the base directory of the web application project, I create a lib/ directory that contains symlinks to the framework packages.
  • In every PHP script that displays a page (typically only index.php), I configure the include path to refer to the packages' content in the lib/ folder, such as:

    set_include_path("./lib/sblayout:./lib/sbdata:./lib/sbcrud");
    

  • Each PHP module is responsible for loading the desired classes or utility functions from the framework packages. As a result, I ended up writing a substantial amount of require() statements, such as:

    require_once("data/model/Form.class.php");
    require_once("data/model/field/HiddenField.class.php");
    require_once("data/model/field/TextField.class.php");
    require_once("data/model/field/DateField.class.php");
    require_once("data/model/field/TextAreaField.class.php");
    require_once("data/model/field/URLField.class.php");
    require_once("data/model/field/FileField.class.php");
    

After my (approximately) 8 years of absence from the PHP domain, I discovered that a tool has been developed to support convenient construction of modular PHP applications: composer. Composer is heavily inspired by the NPM package manager, the de facto package delivery mechanism for Node.js applications.

In the last couple of months (it progresses quite slowly as it is a non-urgent side project), I have decided to get rid of my custom modularity conventions in my framework packages, and to adopt composer instead.

Composer is a useful deployment tool, but its scope is limited to PHP applications only. As frequent readers probably already know, I use Nix-based solutions to deploy entire software systems (that are also composed of non-PHP packages) from a single declarative specification.

To be able to include PHP composer packages in a Nix deployment process, I have developed a generator named: composer2nix that can be used to generate Nix deployment expressions from composer configuration files.

In this blog post, I will explain the concepts of composer2nix and show how it can be used.

Using composer


Using composer is generally quite straightforward. In the most common usage scenario, there is a PHP project (often a web application) that requires a number of dependencies. We change the current working folder to the project directory and run:

$ composer install

Composer then obtains all required dependencies and stores them in the vendor/ sub directory.

The vendor/ folder follows a very specific organisation:

$ find vendor/ -maxdepth 2 -type d
vendor/bin
vendor/composer
vendor/phpdocumentor
vendor/phpdocumentor/fileset
vendor/phpdocumentor/graphviz
vendor/phpdocumentor/reflection-docblock
vendor/phpdocumentor/reflection
vendor/phpdocumentor/phpdocumentor
vendor/svanderburg
vendor/svanderburg/pndp
...

The vendor/ folder structure (mostly) consists of two levels: the outer directories correspond to the namespaces of the packages and the inner directories to the package names.

There are a couple of folders deviating from this convention -- most notably, the vendor/composer directory, which is used by composer to track package installations:

$ ls vendor/composer
autoload_classmap.php
autoload_files.php
autoload_namespaces.php
autoload_psr4.php
autoload_real.php
autoload_static.php
ClassLoader.php
installed.json
LICENSE

In addition to obtaining packages and storing them in the vendor/ folder, composer also generates autoload scripts (as shown above) that can be used to automatically make code units (typically classes) provided by the packages available for use in the project. Adding the following statement to one of your project's PHP scripts:

require_once("vendor/autoload.php");

suffices to load the functionality exposed by the packages that composer installs.

Composer can be used to install both runtime and development dependencies. Many development dependencies (such as phpunit or phpdocumentor) provide command-line utilities to carry out tasks. Composer packages can also declare which executables they provide. Composer automatically generates symlinks for all provided executables in the vendor/bin folder:

$ ls -l vendor/bin/
lrwxrwxrwx 1 sander users 29 Sep 26 11:49 jsonlint -> ../seld/jsonlint/bin/jsonlint
lrwxrwxrwx 1 sander users 41 Sep 26 11:49 phpdoc -> ../phpdocumentor/phpdocumentor/bin/phpdoc
lrwxrwxrwx 1 sander users 45 Sep 26 11:49 phpdoc.php -> ../phpdocumentor/phpdocumentor/bin/phpdoc.php
lrwxrwxrwx 1 sander users 34 Sep 26 11:49 pndp-build -> ../svanderburg/pndp/bin/pndp-build
lrwxrwxrwx 1 sander users 46 Sep 26 11:49 validate-json -> ../justinrainbow/json-schema/bin/validate-json

For example, you can run the following command-line instruction from the base directory of a project to generate API documentation:

$ vendor/bin/phpdocumentor -d src -t out

In some cases (although the composer documentation often discourages this), you may want to install end-user packages globally. They can be installed into the global composer configuration directory by running:

$ composer global require phpunit/phpunit

After installing a package globally (and adding the $HOME/.config/composer/vendor/bin directory to the PATH environment variable), we should be able to run:

$ phpunit --help

The composer configuration


The deployment operations that composer carries out are driven by a configuration file named: composer.json. An example of such a configuration file could be:

{
  "name": "svanderburg/composer2nix",
  "description": "Generate Nix expressions to build PHP composer packages",
  "type": "library",
  "license": "MIT",
  "authors": [
      {
          "name": "Sander van der Burg",
          "email": "svanderburg@gmail.com",
          "homepage": "http://sandervanderburg.nl"
      }
  ],

  "require": {
      "svanderburg/pndp": "0.0.1"
  },
  "require-dev": {
      "phpdocumentor/phpdocumentor": "2.9.x"
  },

  "autoload": {
      "psr-4": { "Composer2Nix\\": "src/Composer2Nix" }
  },

  "bin": [ "bin/composer2nix" ]
}

The above configuration file declares the following configuration properties:

  • A number of meta attributes, such as the package name, description, license and authors.
  • The package type. The type: library indicates that this project is a library that can be used in another project.
  • The project's runtime (require) and development (require-dev) dependencies. In a dependency object, the keys refer to the package names and the values to version specifications that can be either:
    • A semver compatible version specifier that can be an exact version (e.g. 0.0.1), wildcard (e.g. 1.0.x), or version range (e.g. >= 1.0.0).
    • A version alias that directly (or indirectly) resolves to a branch in the VCS repository of the dependency. For example, the dev-master version specifier refers to the current master branch of the Git repository of the package.
  • The autoloader configuration. In the above example, we configure the autoloader to load all classes belonging to the Composer2Nix namespace, from the src/Composer2Nix sub directory.

By default, composer obtains all packages from the Packagist repository. However, it is also possible to consult other kinds of repositories, such as external HTTP sites or VCS repositories of various kinds (including Git, Mercurial and Subversion).

External repositories can be specified by adding a repositories list to the composer configuration:

{
  "name": "svanderburg/composer2nix",
  "description": "Generate Nix expressions to build PHP composer packages",
  "type": "library",
  "license": "MIT",
  "authors": [
      {
          "name": "Sander van der Burg",
          "email": "svanderburg@gmail.com",
          "homepage": "http://sandervanderburg.nl"
      }
  ],
  "repositories": [
      {
          "type": "vcs",
          "url": "https://github.com/svanderburg/pndp"
      }
  ],

  "require": {
      "svanderburg/pndp": "dev-master"
  },
  "require-dev": {
      "phpdocumentor/phpdocumentor": "2.9.x"
  },

  "autoload": {
      "psr-4": { "Composer2Nix\\": "src/Composer2Nix" }
  },

  "bin": [ "bin/composer2nix" ]
}

In the above example, we have defined PNDP's GitHub repository as an external repository and changed the version specifier of svanderburg/pndp to use the latest Git master branch.

To resolve versions, composer parses composer configuration files and branch names in all declared repositories to figure out where a version can be obtained from, and takes the first option that matches the dependency specification. Packagist is consulted last, making it possible for the user to override dependencies.

Pinpointing dependency versions


The version specifiers of dependencies in a composer.json configuration file are nominal and have some drawbacks when it comes to reproducibility -- for example, the version specifier: >= 1.0.1 may resolve to version 1.0.2 today and to 1.0.3 tomorrow, making it very difficult to exactly reproduce a deployment elsewhere at a later point in time.

Although direct dependencies can be easily controlled by the user, it is quite difficult to control the version resolution of the transitive dependencies. To cope with this problem, composer will always generate a lock file (composer.lock) that pinpoints the exact dependency versions (including all transitive dependencies) the first time it is invoked (and whenever composer update is called):

{
    "_readme": [
        "This file locks the dependencies of your project to a known state",
        "Read more about it at https://getcomposer.org/doc/01-basic-usage.md#composer-lock-the-lock-file",
        "This file is @generated automatically"
    ],
    "content-hash": "ca5ed9191c272685068c66b76ed1bae8",
    "packages": [
        {
            "name": "svanderburg/pndp",
            "version": "v0.0.1",
            "source": {
                "type": "git",
                "url": "https://github.com/svanderburg/pndp.git",
                "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
            },
            "dist": {
                "type": "zip",
                "url": "https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da",
                "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
                "shasum": ""
            },
            "bin": [
                "bin/pndp-build"
            ],
            ...
        }
        ...
    ]
}

By bundling the composer.lock file with the package, it becomes possible to reproduce a deployment elsewhere with the exact same package versions.

The Nix package manager


Nix is a package manager that can build all kinds of software packages from source code, such as projects using GNU Autotools, CMake, Perl's MakeMaker, or Apache Ant, and Python projects.

Nix's main purpose is not to be a build tool (it can actually also be used for building projects, but this application area is still highly experimental). Instead, Nix manages dependencies and complements existing build tools by providing dedicated build environments that make deployments reliable and reproducible, for example by clearing all environment variables, making files read-only after the package has been built, restricting network access, and resetting the files' timestamps to 1.

Most importantly, in these dedicated environments Nix ensures that only specified dependencies can be found. This may probably sound inconvenient at first, but this property exists for a good reason: if a package unknowingly depends on another package then it may work on the machine where it has been built, but may fail on another machine because this unknown dependency is missing. By building a package in a pure environment in which all dependencies are known, we eliminate this problem.

To provide stricter purity guarantees, Nix isolates packages by storing them in a so-called "Nix store" (that typically resides in: /nix/store) in which every directory entry corresponds to a package. Every path in the Nix store is prefixed by a hash code, such as:

/nix/store/2gi1ghzlmb1fjpqqfb4hyh543kzhhgpi-firefox-52.0.1

The hash is derived from all build-time dependencies of the package.

Because every package is stored in its own path, and variants of packages never share the same name thanks to the hash prefix, it becomes harder for builds to accidentally succeed due to undeclared dependencies. Dependencies can only be found if the environment has been configured in such a way that the Nix store paths to the packages are known, for example, by configuring environment variables, such as: export PATH=/nix/store/5vyssyqvbirdihqrpqhbkq138ax64bjy-gnumake-4.2.1/bin.

The Nix expression language and build environment abstractions have all kinds of facilities to make the configuration of dependencies convenient.

Integrating composer deployments into Nix builder environments


Invoking composer in a Nix builder environment introduces an additional challenge -- composer is not only a tool that does build management (e.g. it can execute script directives that carry out arbitrary build steps), but also dependency management. The latter responsibility conflicts with the Nix package manager.

In a Nix builder environment, network access is typically restricted, because it affects reproducibility (although it is still possible to hack around this restriction) -- when downloading a file from an external site, it is not known in advance what you will get. An unknown artifact influences the outcome of a package build in unpredictable ways.

Network access in Nix build environments is only permitted in so-called fixed output derivations. For a fixed output derivation, the output hash must be known in advance so that Nix can verify whether we have obtained the artifact we want.

The solution to cope with a conflicting dependency manager is to substitute it -- we must let Nix obtain the dependencies and force the tool to only execute its build management tasks.

We can populate the vendor/ folder ourselves. As explained earlier, the composer.lock file stores the exact versions of pinpointed dependencies including all transitive dependencies. For example, when a project declares svanderburg/pndp version 0.0.1 as a dependency, it may translate to the following entry in the composer.lock file:

"packages": [
    {
        "name": "svanderburg/pndp",
        "version": "v0.0.1",
        "source": {
            "type": "git",
            "url": "https://github.com/svanderburg/pndp.git",
            "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
        },
        "dist": {
            "type": "zip",
            "url": "https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da",
            "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
            "shasum": ""
        },
        ...
    }
    ...
]

As can be seen in the code fragment above, the dependency translates to two kinds of pinpointed source objects -- a source reference to a specific revision in a Git repository and a dist reference to a zipball containing a snapshot of the given Git revision.

The reason why every dependency translates to two kinds of objects is that composer supports two kinds of installation modes: source (to obtain a dependency directly from a VCS) and dist (to obtain a dependency from a zipball).

We can translate the 'dist' reference into the following Nix function invocation:

"svanderburg/pndp" = {
  targetDir = "";
  src = composerEnv.buildZipPackage {
    name = "svanderburg-pndp-99b0904e0f2efb35b8f012892912e0d171e9c2da";
    src = fetchurl {
      url = https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da;
      sha256 = "19l7i7adp76bjf32x9a2ykm0r5cgcmi4wf4cm4127miy3yhs0n4y";
    };
  };
};

and the 'source' reference to the following Nix function invocation:

"svanderburg/pndp" = {
  targetDir = "";
  src = fetchgit {
    name = "svanderburg-pndp-99b0904e0f2efb35b8f012892912e0d171e9c2da";
    url = "https://github.com/svanderburg/pndp.git";
    rev = "99b0904e0f2efb35b8f012892912e0d171e9c2da";
    sha256 = "15i311dc0123v3ppa69f49ssnlyzizaafzxxr50crdfrm8g6i4kh";
  };
};

(As a sidenote: we need the targetDir property to provide compatibility with the deprecated PSR-0 autoloading standard. Packages using this old autoloading style can be stored in a sub folder of a package residing in the vendor/ structure.)

To generate the above function invocations, we need more than just the properties provided by the composer.lock file. Since download functions in Nix are fixed output derivations, we must compute the output hashes of the downloads by invoking a Nix prefetch script, such as nix-prefetch-url or nix-prefetch-git. The composer2nix generator will automatically invoke the appropriate prefetch script to augment the generated expressions with output hashes.
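For example, assuming network access, the output hash of the zipball shown earlier could also be computed manually by running:

$ nix-prefetch-url https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da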

To ensure maximum compatibility with composer's behaviour, the dependencies obtained by Nix must be copied into the vendor/ folder. In theory, symlinking would be more space efficient, but experiments have shown that some packages (such as phpunit) may attempt to load the project's autoload script, e.g. by invoking:

require_once(realpath("../../autoload.php"));

The above require invocation does not work if the dependency is a symlink -- the require path resolves to a path in the Nix store (e.g. /nix/store/...). The parent's parent path corresponds to /nix where no autoload script is stored. (As a sidenote: I have decided to still provide symlinking as an option for deployment scenarios where this is not an issue).

After some experimentation, I discovered that composer uses the following file to track which packages have been installed: vendor/composer/installed.json. Its contents appear to be quite similar to those of the composer.lock file:

[
    {
        "name": "svanderburg/pndp",
        "version": "v0.0.1",
        "version_normalized": "0.0.1.0",
        "source": {
            "type": "git",
            "url": "https://github.com/svanderburg/pndp.git",
            "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da"
        },
        "dist": {
            "type": "zip",
            "url": "https://api.github.com/repos/svanderburg/pndp/zipball/99b0904e0f2efb35b8f012892912e0d171e9c2da",
            "reference": "99b0904e0f2efb35b8f012892912e0d171e9c2da",
            "shasum": ""
        },
        ...
    },
    ...
]

Reconstructing the above file can be done by merging the contents of the packages and packages-dev objects in the composer.lock file.
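The following PHP fragment is a minimal sketch of this reconstruction step (not composer2nix's actual implementation):

/* Parse the lock file and merge the regular and development packages */
$lock = json_decode(file_get_contents("composer.lock"), true);

$packages = isset($lock["packages"]) ? $lock["packages"] : array();
$packagesDev = isset($lock["packages-dev"]) ? $lock["packages-dev"] : array();

/* Write the merged result to composer's installation tracking file */
file_put_contents("vendor/composer/installed.json",
    json_encode(array_merge($packages, $packagesDev),
        JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES));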

Another missing piece in the puzzle is the autoload scripts. We can force composer to dump the autoload script, by running:

$ composer dump-autoload --optimize

The above command generates an optimized autoloader script. A non-optimized autoload script dynamically inspects the contents of the package folders to load modules. This is convenient in the development stage of a project, in which the files continuously change, but in production environments this introduces quite a bit of load time overhead.

Since packages in the Nix store can never change after they have been built, it makes no sense to generate a non-optimized autoloader script.

Finally, the last remaining practical issue is PHP packages providing command-line utilities. Most of these executables have the following shebang line:

#!/usr/bin/env php

To ensure that these CLI tools work in Nix builder environments, the above shebang must be substituted by the path to the PHP executable that resides in the Nix store.
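A substitution step in the builder could be sketched in PHP as follows ($executable and $phpExecutable, the latter referring to the PHP interpreter's location in the Nix store, are assumptions here):

/* Rewrite the shebang so that it points to PHP in the Nix store */
$contents = file_get_contents($executable);
$contents = preg_replace('|^#!/usr/bin/env php|', '#!'.$phpExecutable, $contents);
file_put_contents($executable, $contents);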

After carrying out the above described steps, running the following command:

$ composer install --optimize-autoloader

is just a formality -- it will not download or change anything.

Use cases


composer2nix has a variety of use cases. The most obvious one is to use it to package a web application project with Nix instead of composer. Running the following command generates Nix expressions from the composer configuration files:

$ composer2nix

By running the following command, we can use Nix to obtain the dependencies and generate a package with a vendor/ folder:

$ nix-build
$ ls result/
index.php  vendor/

In addition to web applications, we can also deploy command-line utility projects implemented in PHP. For these kinds of projects, it makes more sense to generate a bin/ sub folder in which the executables can be found.

For example, for the composer2nix project, we can generate a CLI-specific expression by adding the --executable parameter:

$ composer2nix --executable

We can install the composer2nix executable in our Nix profile by running:

$ nix-env -f default.nix -i

and then invoke composer2nix as follows:

$ composer2nix --help

We can also deploy third party command-line utilities directly from the Packagist repository:

$ composer2nix -p phpunit/phpunit
$ nix-env -f default.nix -iA phpunit-phpunit
$ phpunit --version

The most powerful application is not the integration with Nix itself, but the integration with other Nix projects. For example, we can define a NixOS configuration running an Apache HTTP server instance with PHP and our example web application:

{pkgs, config, ...}:

let
  myexampleapp = import /home/sander/myexampleapp {
    inherit pkgs;
  };
in
{
  services.httpd = {
    enable = true;
    adminAddr = "admin@localhost";
    extraModules = [
      { name = "php7"; path = "${pkgs.php}/modules/libphp7.so"; }
    ];
    documentRoot = myexampleapp;
  };

  ...
}

We can deploy the above NixOS configuration as follows:

$ nixos-rebuild switch

By running only one simple command-line instruction, we have a running system with the Apache webserver serving our web application.

Discussion


In addition to composer2nix, I have also been responsible for developing node2nix, a tool that generates Nix expressions from NPM package configurations. Because composer is heavily inspired by NPM, we see many similarities in the architecture of both generators. For example, both generate the same kinds of expressions (a builder environment, a packages expression and a composition expression), have a similar separation of concerns, and both use an internal DSL for generating Nix expressions (NiJS and PNDP).

There are also a number of conceptual differences -- dependencies in NPM can be private to a package or shared among multiple packages. In composer, all dependencies in a project are shared.

The reason why NPM's dependency management is more powerful is that Node.js uses the CommonJS module system. CommonJS considers each file to be a unique module. This, for example, makes it possible for one module to load a version of a package from a certain filesystem location and another module to load a different version of the same package from another filesystem location within the same project.

By contrast, in PHP, isolation is accomplished by the namespace declarations in each file. Namespaces cannot be dynamically altered, so multiple versions of the same package cannot safely coexist in one project. Furthermore, the vendor/ directory structure makes it possible to store only one variant of a package.

The fact that composer's dependency management is less powerful makes constructing a generator much more straightforward than it is for NPM.

Another difference is the pinpointing/locking of dependencies, a feature that composer has supported for quite some time and that NPM has only adopted very recently. When generating Nix expressions from NPM package configurations, we must replicate NPM's dependency resolution algorithm, whereas in composer we can simply take whatever the composer.lock file provides. The lock file saves us from replicating the dependency lookup process, making the generation process considerably easier.

Acknowledgments


My implementation is not the first attempt that tries to integrate composer with Nix. After a few days of development, I discovered another attempt that seemed to be in a very early development stage. I did not try or use it.

Availability


composer2nix can be obtained from Packagist and my GitHub page.

Despite the fact that I did quite a bit of research, composer2nix should still be considered a prototype. One of its known limitations is that it does not support fossil repositories yet.

Monday, September 11, 2017

PNDP: An internal DSL for Nix in PHP

It has been a while since I wrote a Nix-related blog post. In many of my earlier Nix blog posts, I have elaborated about various Nix applications and their benefits.

However, when you are developing a product or service, you typically do not only want to use configuration management tools, such as Nix -- you may also want to build a platform that is tailored towards your needs, so that common operations can be executed structurally and conveniently.

When it is desired to integrate custom solutions with Nix-related tools, you basically have one recurring challenge -- you must generate deployment specifications in the Nix expression language.

The most obvious solution is to use string manipulation to generate the expressions we want, but this has a number of disadvantages. Foremost, composing strings is not a very intuitive activity -- it is not always obvious to see what the end result would be by looking at the code.

Furthermore, it is difficult to ensure that a generated expression is correct and safe. For example, if a string value is not properly escaped, it may be possible to inject arbitrary deployment code putting the security of the deployed system at risk.

For these reasons, I developed NiJS, an internal DSL for Nix in JavaScript, a couple of years ago, to make integration with JavaScript-based applications more convenient. Most notably, NiJS is used by node2nix to generate Nix expressions from NPM package deployment specifications.

I have been doing PHP development in the last couple of weeks and realized that I needed a similar solution for this language. In this blog post, I will describe PNDP, an internal DSL for Nix in PHP, and show how it can be used.

Composing Nix packages in PHP


The Nix packages repository follows a specific convention for organizing packages -- every package is a Nix expression file containing a function definition describing how to build a package from source code and its build-time dependencies.

A top-level composition expression file provides all the function invocations that build variants of packages (typically only one per package) by providing the desired versions of the build-time dependencies as function parameters.

Every package definition typically invokes stdenv.mkDerivation {} (or abstractions built around it) that composes a dedicated build environment in which only the specified dependencies can be found and other kinds of precautions are taken to improve build reproducibility. In this builder environment, we can execute many kinds of build steps, such as running GNU Make, CMake, or Apache Ant.

In our internal DSL in PHP we can replicate these conventions using PHP language constructs. We can compose a proxy to the stdenv.mkDerivation {} invocation in PHP by writing the following class:

namespace Pkgs;
use PNDP\AST\NixFunInvocation;
use PNDP\AST\NixExpression;

class Stdenv
{
    public function mkDerivation($args)
    {
        return new NixFunInvocation(new NixExpression("pkgs.stdenv.mkDerivation"), $args);
    }
}

In the above code fragment, we define a class named: Stdenv exposing a method named mkDerivation. The method composes an abstract syntax tree for a function invocation to stdenv.mkDerivation {} using an arbitrary PHP object of any type as a parameter.

With the proxy shown above, we can create our own packages in PHP by providing a function definition that specifies how a package can be built from source code and its build-time dependencies:

namespace Pkgs;
use PNDP\AST\NixURL;

class Hello
{
    public static function composePackage($args)
    {
        return $args->stdenv->mkDerivation(array(
            "name" => "hello-2.10",

            "src" => $args->fetchurl(array(
                "url" => new NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
                "sha256" => "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
            )),

            "doCheck" => true,

            "meta" => array(
                "description" => "A program that produces a familiar, friendly greeting",
                "homepage" => new NixURL("http://www.gnu.org/software/hello/manual"),
                "license" => "GPLv3+"
            )
        ));
    }
}

The above code fragment defines a class named 'Hello' exposing one static method named: composePackage(). The composePackage method invokes the stdenv.mkDerivation {} proxy (shown earlier) to build GNU Hello from source code.

In addition to constructing a package, the above code fragment also follows the PHP conventions for modularization -- in PHP it is a common practice to modularize code chunks into classes that reside in their own namespace. For example, by following these conventions, we can also automatically load our package classes by using an autoloading implementation that follows the PSR-4 recommendation.

We can create compositions of packages as follows:

class Pkgs
{
    public $stdenv;

    public function __construct()
    {
        $this->stdenv = new Pkgs\Stdenv();
    }

    public function fetchurl($args)
    {
        return Pkgs\Fetchurl::composePackage($this, $args);
    }

    public function hello()
    {
        return Pkgs\Hello::composePackage($this);
    }
}

As with the previous example, the composition is expressed as a class -- in this case, one that exposes variants of packages by invoking their composition functions with the required arguments. In the above example, there is only one variant of the GNU Hello package, so it suffices to propagate the composition object itself as the build parameters.

Contrary to the Nix expression language, we must expose each package composition as a method -- the Nix expression language is a lazy language that only invokes functions when their results are needed, whereas PHP is an eager language that evaluates them immediately.

An implication of eager evaluation is that simply opening the composition module would trigger all packages to be composed. By wrapping the compositions into methods, we make sure that only the requested packages are evaluated when needed.

Another practical implication of creating methods for each package composition is that it can become quite tedious if we have many of them. PHP offers a magic method named: __call() that gets invoked when we invoke a method that does not exist. We can use this magic method to automatically compose a package based on the method name:

public function __call($name, $arguments)
{
    // Compose the classname from the function name
    $className = ucfirst($name);
    // Compose the name of the method to compose the package
    $methodName = 'Pkgs\\'.$className.'::composePackage';
    // Prepend $this so that it becomes the first function parameter
    array_unshift($arguments, $this);
    // Dynamically invoke the class' composition method with $this as the first parameter, followed by the remaining parameters
    return call_user_func_array($methodName, $arguments);
}

The above method takes the (non-existent) method name, converts it into the corresponding class name (by using the camel case naming convention), invokes the package's composition method using the composition object itself as a first parameter, and any other method parameters as successive parameters.
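With this magic method in place (and assuming the explicit hello() method shown earlier is removed), a composition can be requested as follows:

$pkgs = new Pkgs();

/* No hello() method exists, so __call() translates this invocation
   into: Pkgs\Hello::composePackage($pkgs) */
$expr = $pkgs->hello();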

Converting PHP language constructs into Nix language constructs


Everything that PNDP does boils down to the phpToNix() function that automatically converts most PHP language constructs into semantically equivalent or similar Nix language constructs (a small demonstration follows after the list):

  • A variable of type boolean, integer or double is converted verbatim.
  • A string will be converted into a string in the Nix expression language, and conflicting characters, such as the backslash and double quote, will be escaped.
  • In PHP, arrays can be sequential (when all elements have numeric keys that appear in numeric order) or associative in the remainder of the cases. The generator tries to detect what kind of array we have. It recursively converts sequential arrays into Nix lists of Nix language elements, and associative arrays into Nix attribute sets.
  • An object that is an instance of a class, will be converted into a Nix attribute set exposing its public properties.
  • A NULL reference gets converted into a Nix null value.
  • Variables that have an unknown type or are a resource will throw an exception.
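For example, the following fragment exercises most of the conversions listed above:

use PNDP\NixGenerator;

$expr = array(
    "enabled" => true,              /* boolean: converted verbatim */
    "count" => 3,                   /* integer: converted verbatim */
    "message" => "hello \"world\"", /* string: the quotes get escaped */
    "numbers" => array(1, 2, 3),    /* sequential array: becomes a list */
    "nothing" => null               /* NULL: becomes a Nix null value */
);

echo(NixGenerator::phpToNix($expr, true));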

As with NiJS (and JavaScript), the PHP host language does not provide equivalents for all Nix language constructs, such as values of the URL type, or encoding Nix function definitions.

You can still generate these objects by composing an abstract syntax tree from objects that are instances of the NixObject class. For example, by composing a NixURL object, we can generate a value of the URL type in the Nix expression language.

Arrays are a bit confusing in PHP, because you do not always know in advance whether an array will yield a list or an attribute set. To make these conversions explicit and to prevent generation errors, arrays can be wrapped inside a NixList or NixAttrSet object.
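For example, an empty PHP array is inherently ambiguous. Assuming constructors that simply take the wrapped array as a parameter, the intent can be made explicit as follows:

use PNDP\AST\NixList;
use PNDP\AST\NixAttrSet;

$emptyList = new NixList(array());       /* always generates: [ ] */
$emptyAttrSet = new NixAttrSet(array()); /* always generates: { } */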

Building packages programmatically


The PNDPBuild::callNixBuild() function can be used to build a generated Nix expression, such as the GNU Hello example shown earlier:

/* Evaluate the package */
$expr = PNDPBuild::evaluatePackage("Pkgs.php", "hello", false);

/* Call nix-build */
PNDPBuild::callNixBuild($expr, array());

In the code fragment above, we open the composition class file, named: Pkgs.php, and evaluate the hello() method to generate the Nix expression. Finally, we call the callNixBuild() function, in which the generated expression is evaluated by the Nix package manager. When the build succeeds, the resulting Nix store path is printed to the standard output.

Building packages from the command-line


As the previous code example is so common, there is also a command-line utility that can execute the same task. The following instruction builds the GNU Hello package from the composition class (Pkgs.php):

$ pndp-build -f Pkgs.php -A hello

It may also be useful to see what kind of Nix expression is generated for debugging or testing purposes. The --eval-only option prints the generated Nix expression on the standard output:

$ pndp-build -f Pkgs.php -A hello --eval-only

We can also nicely format the generated expression to improve readability:

$ pndp-build -f Pkgs.php -A hello --eval-only --format

Discussion


In this blog post, I have described PNDP: an internal DSL for Nix in PHP.

PNDP is not the first internal DSL I have developed for Nix. A couple of years ago, I also wrote NiJS: an internal DSL in JavaScript. PNDP shares a lot of concepts and implementation details with NiJS.

Compared to NiJS, the functionality of PNDP is much more limited -- I have developed PNDP mainly for code generation purposes. In NiJS, I have also been exploring the abilities of the JavaScript language, such as exposing JavaScript functions in the Nix expression language, and the possibilities of an internal DSL, such as creating an interpreter that makes it a primitive standalone package manager. In PNDP, all this extra functionality is missing, since I have no practical need for it.

In a future blog post, I will describe an application that uses PNDP as a generator.

Availability


PNDP can be obtained from Packagist as well as my GitHub page. It can be used under the terms and conditions of the MIT license.

Wednesday, August 30, 2017

A checklist of minimalistic layout considerations for web applications

As explained in my previous blog post, I used to be quite interested in web technology and spent considerable amounts of time developing my own framework, providing solutions for common problems that I used to face, such as layout management and data management.

Another challenge that you cannot avoid is the visual appearance of your web application. Today, there are many frameworks and libraries available allowing you to do impressive things, such as animated transitions, fade in/fade out effects and so on.

Unfortunately, many of these "modern" solutions also have a number of big drawbacks -- typically, they are big and complex JavaScript-based frameworks that significantly increase the download size of pages and the amount of system resources (e.g. CPU, GPU, battery power) required to render a page. As a result, it is not uncommon that the download size of a web site equals that of the Doom video game, and that pages feel slow and sluggish.

Some people (such as the author of this satirical website) suggest that most (all?) visual aspects are unnecessary and that a plain "vanilla" page displaying information suffices to provide users what they need. I do not entirely agree with this viewpoint, as many web sites are not just collections of pages but complex information systems. Complex information systems require some means of organizing data, including a layout that reflects this organization, such as menu panels that guide a user through the desired sets of information.

I know many kinds of tricks (and hacks) to implement layouts and visual aspects, but one thing that I am not particularly good at is designing visuals myself. There is a big pool of layout considerations to choose from and many combinations that can be made. Some combinations of visual aspects are good, others are bad -- I simply lack the intuition to make the right choices. This does not apply to web design only -- when I had to choose furniture for my house I basically suffered from the same problem. As a result, I have created (by accident) a couple of things that look nice and other things that look dreadful :-)

Although I have worked with designers, it is not always possible to consult one, in particular for non-commercial/more technical projects.

In this blog post, I have gathered a number of minimalistic layout considerations that, regardless of the objective of the system and the absence of design intuition, I find worth considering, with some supporting information so that rational decisions can be made. Furthermore, they are relatively simple to apply and do not require any framework.

Text size


A web application's primary purpose is providing information. As a consequence, how you present text is very important.

A number of studies (such as this one by Jakob Nielsen in 1997) show that visitors of web pages do not really read, but scan for information. Although this study was done many years ago, its findings still apply to today's screens, such as tablets and devices designed for reading books, like the Kindle.

Because users typically read much slower from a screen than from paper and hardly read sections entirely, it is a very good idea to pick a font size that is large enough.

In most browsers, the default font size is set to 16 pixels. Studies suggest that this size is anything but too small, even for the high resolution screens that we use nowadays. Moreover, a font size of 16px on a modern screen is roughly comparable to the font size of a (physical) book.

CSS allows you to define font sizes statically or relatively. In my opinion (supported by this article), it is good practice to use relative font sizes, because that allows users to control the size of the fonts.

For me, typically the following CSS setting suffices:

body
{
    font-size: 100%;
}

Spacing


When I was younger, I had the tendency to put as much information on a page as possible, such as densely written text, images and tables. At some point, I worked with a designer who constantly reminded me that I should keep enough space between page elements.

(As a sidenote: space does not necessarily have to be white space -- it could also be the background color or a background gradient. Some designers call this kind of space "negative spacing".)

Why is sufficient negative spacing a good thing? According to some sources, filling a page with too many details, such as images, makes it difficult to maintain a user's attention. For example, if a page contains too many graphics or colors that appear unrelated to the rest of the page, a user will quickly skip details.

Furthermore, some studies suggest that negative spacing is an effective way to emphasize important elements of a page and a very effective way to direct the flow of a page.

From an implementation perspective, there are many kinds of HTML elements that could be adjusted (from the browser's default settings) to use a bit of extra spacing, such as:

  • Text: I typically adjust the line-height to 1.2em, increasing the space between the lines of a paragraph.
  • For divs and other elements that define sections, I increase their margins and paddings. Typically, I would set them to at least 1em.
  • For tables and table cells I increase their paddings to 0.5em.
  • For preformatted text (pre) I use a padding value of at least 1em.
  • For list items (entries of an unordered or ordered list) I add a bit of extra spacing on top, e.g. li { margin: 0.2em 0; }.
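Expressed as CSS, the adjustments listed above roughly boil down to the following stylesheet (a minimal sketch; the div selector assumes that sections are defined by plain divs):

p
{
    line-height: 1.2em; /* extra space between the lines of a paragraph */
}

div
{
    margin: 1em;
    padding: 1em;
}

td, th
{
    padding: 0.5em; /* extra space inside table cells */
}

pre
{
    padding: 1em;
}

li
{
    margin: 0.2em 0; /* a bit of extra space above and below each list item */
}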

Some people would argue that the above list of adjustments is still quite conservative and that even more spacing should be considered.

Font color/foreground contrast


Another important thing is to pick the right foreground colors (such as the text color) and ensure that they have sufficient contrast -- for example, light grey colored text on a white background is very difficult for users to read.

In most browsers, the default setting is to have a white background and black colored text. Although this color scheme maximizes contrast, too much contrast also has a disadvantage -- it maximizes a user's attention span for a while, but a user cannot maintain such a level of attention indefinitely.

When displaying longer portions of text, it is typically better to lower the contrast a bit, but not too much. For example, when I want to display black text on a white background, I tone down the contrast a bit by setting the text color to dark grey (#444) as opposed to black (#000), and the background color to very light grey (#ddd) as opposed to white (#fff).
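In CSS, this amounts to the following rule:

body
{
    color: #444;            /* dark grey text as opposed to black */
    background-color: #ddd; /* very light grey as opposed to white */
}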

Defining panels and sections


A web application is typically much more than just a single page of content. Typically, it is a complex information system displaying many kinds of data. This data has to be divided, categorized and structured. As a result, we also need facilities that guide users through these sets of information, such as menu and content panels.

Creating a layout with "panels" turns out to be quite a complex problem, for which various kinds of strategies exist, each having their own pros and cons. For example, I have used the following techniques:

  • Absolute positioning. We add the property: position: absolute; to a div and we use the left, right, top and bottom properties to specify the coordinates of the top left and bottom right position of the panel. As a result, the div automatically gets positioned (relative to the top left position of the screen) and automatically gets a width and height. For the sections that need to expand, e.g. the contents panel displaying the text, we use the overflow: auto; property to enable vertical scroll bars if needed.

    Although this strategy seems to do mostly what I want, it also has a number of drawbacks -- the position of the panels is fixed. For desktops this is usually fine, but for screens with a limited height (such as mobile devices and tablets) this is quite impractical.

    Moreover, it used to be a big problem when Internet Explorer 6 was still the dominant browser -- IE6 did not implement the property that automatically derives the width and height, requiring me to implement a workaround that uses JavaScript to compute the widths and heights.
  • Floating divs is a strategy that is more friendly to displays with a limited height but has different kinds of challenges. Basically, by adding a float property to each panel, e.g. float: left; and specifying a width we can position columns next to each other. By using a clear hack, such as: <div style="clear: both;"></div> we can position a panel right beneath another panel.

    This strategy mostly makes sense, despite the fact that the behaviour of floating elements is somewhat strange. One of the things that is very hard to solve is creating a layout in which columns have an equal height when their heights are not known in advance -- someone has written a huge blog post with all kinds of strategies. I, for example, implemented the "One True Layout Method" (a margin-padding-overflow hack) quite often.
  • Flexbox is yet another (more modern) alternative and the most powerful solution IMO so far. It allows you to concisely specify how divs should be positioned, wrapped and sized, as the sketch below illustrates. The only downside I see so far is that these properties are relatively new and require recent browser layout engines. Many users typically do not bother to upgrade their browsers, unless they are forced to.
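The following sketch (using hypothetical ids: a wrapper div container holding a menu and a contents panel) positions a fixed-width menu panel next to a contents panel that takes up the remaining space:

#container
{
    display: flex; /* lay out the child divs as columns */
}

#menu
{
    width: 12em; /* fixed-width menu panel */
}

#contents
{
    flex: 1;        /* the contents panel takes the remaining width */
    overflow: auto; /* display scroll bars when the content does not fit */
}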

Resolution flexibility (a.k.a. responsive web design)


When I first gained access to the Internet, nearly all web page visitors were desktop users. Today, however, a substantial number of visitors use different kinds of devices, with small screens (such as phones and tablets) or very big screens (such as TVs).

To give all these visitors a relatively good user experience, it is important to make the layout flexible enough to support many kinds of resolutions. Some people call this kind of flexibility responsive web design, but I find that term somewhat misleading.

Besides the considerations shown earlier, I also typically implement the following aspects to make a layout even more flexible:

  • Eliminating vertical menu panels. When we have two levels of menu items, I typically display the secondary menu panel on the left of the screen. For desktops (and bigger screens) this is typically fine, but for smaller displays it eats too much space away from the section that displays the contents. I typically use media queries to reconfigure these panels in such a way that the menu items are aligned horizontally by default and aligned vertically when the screen is wide enough, as shown in the sketch after this list.
  • Making images dynamically resizable. Images, such as photos in a gallery, may be too big to properly display on a smaller screen, such as a phone. Fortunately, by providing the max-width setting we can adjust the style of images in such a way that their maximum width never exceeds the screen size and their dimensions get scaled accordingly:

    img
    {
        max-width: 100%;
        width: auto;
        height: auto;
    }
    
  • Adding horizontal scroll bars, when needed. For some elements, it is difficult to resize them in such a way that they never exceed the screen width, such as sections of preformatted text which I typically use to display code fragments. For these kinds of elements I typically configure the overflow-x: auto; property so that horizontal scroll bars appear when needed.
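As a sketch of the first point, assuming the secondary menu items are list elements inside a panel with the hypothetical id submenu, a media query can switch between horizontal and vertical alignment:

#submenu li
{
    display: inline-block; /* align the menu items horizontally by default */
}

@media only screen and (min-width: 50em)
{
    #submenu li
    {
        display: block; /* align the menu items vertically on wider screens */
    }
}

And, for the last point, horizontal scroll bars for preformatted text are a one-liner:

pre
{
    overflow-x: auto; /* show a horizontal scroll bar only when needed */
}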

Picking colors


In addition to the text and the background colors, we may also want to pick a couple of additional colors. For example, we need different colors for hyperlinks so that it becomes obvious to users what is clickable and what is not, and whether a link has already been visited. Furthermore, providing distinct colors for the menu panels, headers, and buttons would be nice as well.

Unfortunately, when it comes to picking colors, my intuition lets me down completely -- I do not know what "nice colors" are or which colors fit best together. As a result, I always have a hard time making the right choices.

Recently, I discovered a concept called "color theory" that lets you decide, somewhat rationally, what colors to pick. The basic idea behind color theory is that you pick a base color and then apply a color scheme to get a number of complementary colors, such as:

  • Triadic. Composed of 3 colors evenly spaced around the color wheel.
  • Compound. One color is selected in the same area of the color spectrum and two colors are chosen from opposite ends of the color spectrum.
  • Analogous. Careful selection of colors in the same area of the color spectrum.

A particularly handy online tool (provided by the article shown above) is paletton, which gives me good results -- it supports various color schemes, has a number of export functions (including CSS) and a nice preview function that shows you what a web page would look like if you apply the generated color scheme.

Unfortunately, I have not found any free/open-source solutions or a GIMP plugin allowing me to do the same.

Printing


Something that is typically overlooked by many developers is printing. For most interactive web sites this is not too important, but for information systems, such as reservation systems, it is good practice to implement proper printing support.

I consider it to be a good practice to hide non-relevant panels, such as the menu panels:

@media only print
{
    #header, #menu
    {
        display: none;
    }
}

Furthermore, it is also good practice to tone down the amount of color a bit.
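For example, the following addition to the print stylesheet (a minimal sketch) falls back to plain black text on a white background:

@media only print
{
    body
    {
        color: #000;            /* maximize contrast and save ink */
        background-color: #fff;
    }
}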

A simple example scenario


As an experiment to test the above listed concerns (e.g. text size, foreground contrast, using paletton to apply color theory and NOT trusting my intuition), I ended up implementing the following "design":

This is what it looks like in a desktop browser:


For printing, the panels and colors are removed:


The page is even functional in a text-oriented browser (w3m):


By using media queries, we can adjust the positioning of the sub menu to make more space available for the contents panel on a mobile display (my Android phone):


The layout does not look shiny or fancy, but it appears functional to me. Then again, don't ask me too much, since I simply lack the feeling for design. :-)

In my previous blog post, I described my own custom web framework for which I released a number of additional components. I have also implemented an example repository with simple demonstration applications. I have decided to use my "rationally crafted layout" to make them look a bit prettier.

Conclusion


In this blog post, I have gathered a collection of minimalistic layout considerations for myself that, regardless of the objective of a web application, I find worth considering. Moreover, these considerations can mostly be applied in a rational way, which is good for people like me who lack design intuition.

Although I have described many aspects in this blog post, applying the above considerations will not produce any fancy/shiny web pages. Many additional visual aspects can still be implemented on top of them, such as rounded corners, shadows, gradients, animated transitions, fade-ins and fade-outs, and so on.

Moreover, from writing this blog post I learned a thing or two about usability and HTML/CSS tricks. However, I again observed that the more you know, the more you realize that you do not know. For example, there are even more specialized studies available, such as one about the psychology of colors showing, among other things, that women prefer blue, purple, and green, while men prefer blue, green, and black.

This, however, is where I draw the line when it comes to learning design skills -- it made me realize why I prefer not to become a front-end/design specialist. Knowing some things about design is always useful, but the domain is much deeper and more complex than I initially thought.

The only thing that I would still like to do is find additional, simple, rationally applicable design considerations. Does anyone have some additional suggestions for me?

Sunday, July 2, 2017

Some reflections on my experiences with web technology

It has been a while since I wrote my last blog post. In the last couple of months, I have been working on many kinds of things, such as resurrecting the most important components of my personal web framework (which I developed many years ago) and making them publicly available on GitHub.

There are a variety of reasons for me to temporarily switch back to this technology area for a brief period of time -- foremost, there are a couple of web sites still using pieces of my custom framework, such as those related to my voluntary work. I recently had to make changes, mostly maintenance-related, to these systems.

The funny thing is that most people do not consider me a web development (or front-end) person and this has an interesting history -- many years ago (before I started doing research) I always used to refer to "web technology" as one of my main technical interests. Gradually, my interest started to fade, up until the point that I stopped mentioning it.

I started this blog somewhere in the middle of my research, mainly to provide additional practical information. Furthermore, I have been using my blog to report on everything I do open-source related.

If I had started this blog several years earlier, many articles would have been related to web technology. Back then, I spent considerable amounts of time investigating techniques, problems and solutions. Retrospectively, I regret that it took me so long to make writing a recurring habit as part of my work -- many interesting discoveries were never documented and have become forgotten knowledge.

In this blog post, I will reflect on my web programming experiences, describe some of the challenges I used to face and the solutions I implemented.

In the beginning: everything looked great


I vividly remember the early days, when I was first introduced to the internet, somewhere in the mid 90s. Around that time, Netscape Navigator 2.0 was still the dominant and most advanced web browser.

Furthermore, the things I could do on the internet were very limited -- today we are connected to the internet almost 24 hours a day (mainly because many of us have smart phones allowing us to do so), but back then I only had a very slow 33K6 dial-up modem and internet access for only one hour a week.

Aside from the fact that it was quite an amazing experience to be connected to the world despite these limitations, I was also impressed by the underlying technology to construct web sites. It did not take long for me to experiment with these technologies myself, in particular HTML.

Quite quickly I was able to construct a personal web page whose purpose was simply to display some basic information about myself. Roughly, what I did was something like this:

<html>
  <head>
    <title>My homepage</title>
  </head>

  <body bgcolor="#ff0000" text="#000000">
    <h1>Hello world!</h1>

    <p>
      Hello, this is my homepage.
    </p>
    <p>
      <img src="image.jpg" alt="Image">
    </p>
  </body>
</html>

It was simply a web page with a red colored background displaying some text, hyperlinks and images:


You may wonder what is so special about building a web page with a dreadful background color, but before I was introduced to web technology, my programming experience was limited to various flavours of BASIC (such as Commodore 64, AMOS, GW and Quick BASIC), Visual Basic, 6502 assembly and Turbo Pascal.

Building user interfaces with these kinds of technologies was quite tedious and somewhat impractical compared to using web technology -- for example, you had to programmatically define your user interface elements, size them, position them, define style properties for each individual element and program event handlers to respond to user events, such as mouse clicks.

With web technology this suddenly became quite easy and convenient -- I could now concisely express what I wanted and the browser took care of the rendering parts.

Unknowingly, I was introduced to a discipline called declarative programming -- I could describe what I wanted as opposed to specifying how to do something. Writing applications declaratively had all kinds of advantages beyond the ability to express things concisely.

For example, because HTML code is high-level (not entirely, but it is supposed to be), it does not really matter much which browser application you use (or underlying platform, such as the operating system), making your application quite portable. Rendering a paragraph, a link, an image or a button can be done on many kinds of different platforms from the same specification.

Another powerful property is that your application can degrade gracefully. For example, when using a text-oriented browser, your web application should still be usable without the ability to display graphics. The alternate text (alt) attribute of the image element should ensure that the image description is still visible. Even when no visualization is possible (e.g. for visually impaired people), you could, for example, use a Text to Speech system to interpret your pages' content.

Moreover, the introduction of Cascading Style Sheets (CSS) made it possible to separate the style concern from the page structure and contents making the code of your web page much more concise. Before CSS, extensive use of presentational tags could still make your code quite messy. Separation of the style concern also made it possible to replace the stylesheet without modifying the HTML code to easily give your page a different appearance.

To deal with visual discrepancies between browser implementations, HTML was standardized, and standards-mode rendering was introduced when an HTML doctype was added to an HTML file.

What went wrong?


I have described a number of appealing traits of web technology in the previous section -- programming applications declaratively from a high level perspective, separation of concerns, portability because of high-level abstractions and standardization, the ability to degrade gracefully and conveniently making your application available to the world.

What could possibly be the reason to lose my passion when this technology has so many compelling properties?

I have a long list of anecdotes, but most of my reasons can be categorized as follows:

There are many complex additional concerns


Most web technologies (e.g. HTML, CSS, JavaScript) provide solutions for the front-end, mainly to serve and render pages, but many web applications are much more than simply a collection of pages -- they are in fact complex information systems.

To build information systems, we have to deal with many additional concerns, such as:

  • Data management. User provided data must be validated, stored, transformed into something that can be visually represented, and properly escaped before it is inserted into a database (to prevent SQL injections).
  • Security. User permissions must be validated on all kinds of levels, such as page level, or section level. User roles must be defined. Secure connections must be established by using the SSL protocol.
  • Scalability. When your system has many users, it will no longer be possible to serve your web application from a single web server because it lacks sufficient system resources. Your system must be decomposed and optimized (e.g. by using caching).

In my experience, the tech companies I used to work for understood these issues, but I also ran into many situations in which non-technical people did not understand that all these things are complicated and necessary.

One time, I even ran into somebody saying: "Well, you can export Microsoft Word documents to HTML pages, right? Why should things be so difficult?".

There is a lack of abstraction facilities in HTML


As explained earlier, programming with HTML and CSS could be considered declarative programming. At the same time, declarative programming is a spectrum -- it is difficult to draw a hard line between what and how -- it all depends on the context.

The same thing applies to HTML -- from one perspective, HTML code can be considered a "what specification", since you do not have to specify how to render a paragraph, image or button.

In other cases, you may want to do things that cannot be directly expressed in HTML, such as embedding a photo gallery on your web page -- there is no HTML facility allowing you to concisely express that. Instead, you must provide the corresponding HTML elements that implement the gallery, such as the divisions, paragraphs, forms and images. Furthermore, HTML does not provide you any facilities to define such abstractions yourself.

If there are many recurring high level concepts to implement, you may end up copying and pasting large portions of HTML code between pages making it much more difficult to modify and maintain the application.

A consequence of not being able to define custom abstractions in HTML is that it has become very common to generate pages server side. Although this suffices to get most jobs done, generating dynamic content is many times more expensive than serving static pages, which is quite silly if you think too much about it.

A very common server side abstraction I used to implement (in addition to an embedded gallery) is a layout manager allowing you to manage static common sections, a menu structure and dynamic content sections. I ended up inventing such a component because I was used to frames, which became deprecated. Moving away from them required me to reimplement common sections of a page over and over again.

In addition to generated code, using JavaScript has also become quite common, by dynamically injecting code into the DOM or transforming elements. As a result, quite a few pages will not function properly when JavaScript has been disabled or is unsupported.

Moreover, many pages embed a substantial amount of JavaScript, significantly increasing their sizes. A study reveals that the total size of quite a few modern web pages equals that of the Doom video game.

There is a conceptual mismatch between 'pages' and 'screens'


HTML is a language designed for constructing pages, not screens. However, information systems typically require a screen-based workflow -- users need to modify data, send their change requests to the server and update their views so that their modifications become visible.

In HTML, there are only two ways to propagate parameters to the server and get feedback -- hyperlinks (containing GET parameters) or forms. In both cases, a user gets redirected to another page that should display the result of the action.

For pages displaying tabular data or other complicated data structures, this is quite inconvenient -- we have to rerender the entire page each time we change something and scroll the user back to the location where the change was made (e.g. by defining anchors).

Again, with JavaScript this problem can be solved in a more proper and efficient way -- by programming an event handler (such as a handler for the click event), using the XMLHttpRequest object to send a change message to the server and updating the appropriate DOM elements, we can rerender only the affected parts.

Unfortunately, this again breaks the declarative nature of web technologies and the ability to degrade gracefully -- in a browser that lacks JavaScript support (e.g. text-oriented browsers) this solution will not work.

Also, efficient state management is a complicated problem that you may want to solve by integrating third party JavaScript libraries, such as MobX or Redux.

Layouts are extremely difficult


In addition to the absence of abstractions in HTML (motivating me to develop a layout manager), implementing layouts in general is also something I consider to be notoriously difficult. Moreover, the layout concern is not well separated -- some aspects need to be done in your page structure (HTML code) and other aspects need to be done in stylesheets.

Although changing most visual properties of page elements in CSS is straightforward (e.g. adjusting the color of the background, text or borders), dealing with layout related aspects (e.g. sizing and positioning page elements) is not. In many cases I had to rely on clever tricks and hacks.

One of the weirdest recurring layout tricks I used to implement is a hack to make the heights of two adjacent floating divs equal. This is something I commonly used to put a menu panel next to a content panel displaying text and images, without knowing the height of either panel in advance.

I ended up solving this problem as follows. I wrapped the divs in a container div:

<div id="container">
  <div id="left-column">
  </div>
  <div id="right-column">
  </div>
</div>

and I provided the following CSS stylesheet:

#container
{
    overflow: hidden;
}

#left-column
{
    padding-bottom: 3000px;
    margin-bottom: -3000px;
}

#right-column
{
    padding-bottom: 3000px;
    margin-bottom: -3000px;
}

In the container div, I abuse the overflow property (disabling scroll bars if the height exceeds the screen size). For the panels themselves, I use a large padding value and an equivalent negative margin. The latter hack causes the panels to stretch in such a way that their heights become equal:


(As a sidenote: the above problem can nowadays be solved in a better way using a flexbox layout, but a couple of years ago you could not use this newer CSS feature.)
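For comparison, a minimal flexbox-based sketch for the same HTML structure -- flex items stretch to the height of the tallest item by default, so no padding/margin hack is needed:

#container
{
    display: flex;
}

#left-column
{
    width: 12em; /* an illustrative fixed width for the menu column */
}

#right-column
{
    flex: 1; /* the right column takes the remaining width */
}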

The equal-height hack shown above is not an exception. Another notable trick is the clear hack (e.g. <div style="clear: both;"></div>) to ensure that the height of the surrounding div grows automatically with the heights of its inner divs.

As usual, JavaScript can be used to abstract these oddities away, but doing so breaks declarativity. Furthermore, when JavaScript is used for an essential part of your layout, your page will look weird if JavaScript has been disabled.

Interoperability problems


Many web technologies have been standardized by the World Wide Web Consortium (W3C) with the purpose to ensure interoperability among browsers. The W3C also provides online validator services (e.g. for HTML and CSS) that you can use to upload your code and check for its validity.

As an outsider, you may probably expect that if your uploaded code passes validation that your web application front-end is interoperable and will work properly in all browsers... wrong!

For quite some time, Internet Explorer 6 was the most dominant web browser. Around the time that it was released (2001) it completely crushed its biggest competitor (Netscape) and gained 95% market share. Unfortunately, it did not support modern web standards well (e.g. CSS 2 and newer). After winning the browser wars, Microsoft pretty much stopped its development.

Other browsers kept progressing and started to become much more advanced (most notably Mozilla Firefox). They also followed the web standards more faithfully. Although this was a good development, the sad thing was that in 2007 (6 years later) Internet Explorer 6 was still the most dominant browser with its lacking support of standards and many conformance bugs -- as a result, you were forced to implement painful IE-specific workarounds to make your web page work properly.

What I typically used to do is implement a web page for "decent browsers" first (e.g. Firefox, Chrome, Safari), and then add Internet Explorer-specific workarounds (such as additional stylesheets) on top. By using conditional comments, an Internet Explorer-specific feature that treats certain comments as code, I could ensure that the hacks were not used by any non-IE browser. An example use case is:

<!--[if lt IE 7]><link rel="stylesheet" type="text/css" href="ie-hacks.css"><![endif]-->

The above conditional comment states that if an Internet Explorer version lower than 7 is used, then the provided ie-hacks.css stylesheet should be used. Otherwise, it is treated as a comment and will be ignored.

Fortunately, Google Chrome took over the role of most dominant web browser, and it is developed more progressively, eliminating most standardization problems. Interoperability today is still not a strong guarantee, in particular for new technologies, but it is considerably better than around the time that Internet Explorer dominated the browser market.

Stakeholder difficulties


Another major challenge is dealing with stakeholders and their different, somewhat conflicting interests.

The most important things to end-users are that a web application provides the information they need, that it can be conveniently found (e.g. through a search engine), and that they are not distracted too much. The visual appearance of your web site also matters to some extent (e.g. a dreadful appearance will affect an end user's ability to find what they need, as well as your credibility), but it is typically not as important as most people think.

Most clients (the group of people requesting your services) are mostly concerned with visual effects and features, and not so much with the information they need to provide to their audience.

I still vividly remember a web application that I developed whose contents could be fully managed by the end user with a simple HTML editor. I deliberately kept the editor's functionality simple -- for example, it was not possible in the editor to adjust the style (e.g. the color of the text or background), because I believed that users should simply follow the stylesheet.

Moreover, I have spent substantial amounts of time explaining clients how to write for the web -- they need to organize their information properly, write concisely, and structure/format their text.

Despite all my efforts in bridging the gap between end-users and clients, I still remember that one particular client ran away dissatisfied because of the lack of customization. He moved to a more powerful/flexible CMS, and his new homepage looked quite horrible -- ugly background images, dreadful text colors, entire sentences written in capitals, e.g.: "WELCOME TO MY HOMEPAGE!!!!!!".

I have also been in a situation once in which I had to deal with two rivaling factions in an organization -- one being very supportive of my ideas and the other being completely against them. They did not communicate with each other much, and I basically had to serve as a proxy between them.

Also, I have worked with designers, which gave me mixed experiences. A substantial group of designers basically assumed that a web page design is the same thing as a paper design, e.g. a booklet.

With a small number of them I had quite a few difficulties explaining that web page designs need to be flexible, and working towards a solution that meets these criteria -- people use different kinds of screen sizes and resolutions, tend to resize their windows, adjust their font sizes, and so on. Making a design that is too static will affect how many users you will attract.

Technology fashions


Web technology is quite sensitive to technological fashions -- every day new frameworks, libraries and tools appear, sometimes for relatively new and uncommon programming languages.

While new technology typically provides added value, you can also quite easily shoot yourself in the foot. Being forced to work around a system's broken foundation is not particularly a fun job.

Two of my favorite examples of questionable technology adoptions are:

  • NoSQL databases. At some point, probably because of success stories from Google, a lot of people considered traditional relational databases (using SQL) not to be "web scale" and massively shifted to so-called NoSQL databases. Some well known NoSQL databases (e.g. MongoDB) sacrifice properties in favor of speed -- such as consistency guarantees.

    In my own experience, for many applications that I developed, the relational model made perfect sense. Also, consistency guarantees were way more important than speed benefits. As a matter of fact, most of my applications were fast enough. (As a sidenote: this does not disqualify NoSQL databases -- they have legitimate use cases, but in many cases they are simply not needed.)
  • Single threaded event loop server applications (e.g. applications built on Node.js).

    The single threaded event loop model has certain benefits over the traditional thread/process per connection approach -- little memory and multitasking overhead, making it a much better fit for handling large numbers of connections.

    Unfortunately, most people do not realize that there are two sides to the coin -- in a single threaded event loop, the programmer has the responsibility to make sure that it never blocks, so that the application remains responsive (I have seen quite a few programmers who simply lack the understanding and discipline to do that).

    Furthermore, whenever something unexpected goes wrong, you end up with an application that crashes completely, making it much more sensitive to disruptions. Also, this model is not a very good fit for computationally intensive applications.

    (Again: I am not trying to disqualify Node.js or the single threaded event loop concept -- they have legitimate use cases and benefits, but it is not always a good fit for all kinds of applications).

My own web framework


I extensively developed a custom PHP-based web framework between 2002 and 2009. Most of its ideas were born while I was developing a web-based information system to manage a documentation library for a side job. I noticed that there were quite a few complexities that I had to overcome to make it work properly, such as database integration, user authentication, and data validation.

After completing the documentation library, I developed a number of similarly looking information systems with similar functionality. Over the years, I learned more techniques, recognized common patterns, captured abstractions, and kept improving my personal framework. Moreover, I used my framework for a variety of additional use cases, including web sites for small companies.

In the end, it evolved into a framework providing the following high level components (the boxes in the diagram denote packages, while the arrows denote dependency relationships):


  • php-sbdata. This package can be used to validate and present data fields. It can also manage collections of data fields as forms and tables. Originally, this package was only used for presentation, but based on my experience with WebDSL I have also integrated validation.
  • php-sbeditor. This package provides an HTML editor implementation that can be embedded into a web page. It can also optionally integrate with the data framework to expose a field as an HTML editor. When JavaScript is unsupported or disabled, it will fall back to a text area in which the user can directly edit HTML.
  • php-sblayout. This package provides a layout manager that can be used to manage common sections, the menu structure and dynamic sections of a page. A couple of years ago, I wrote a blog post explaining how it came about. In addition to a PHP package, I also created a Java Servlet/JSP implementation of the same concepts.
  • php-sbcrud is my partial solution to the page-screen mismatch problem and combines the concepts of the data management and layout management packages.

    Using the CRUD manager, every data element and data collection has its own URL, such as http://localhost/index.php/books to display a collection of books and http://localhost/index.php/books/1 to display an individual book. By default, data is displayed in view mode. Modifications can be made by appending GET parameters to the URL, such as: http://localhost/index.php/books/1?__operation=remove_book.
  • php-sbgallery provides an embeddable gallery sub application that can be embedded in various ways -- directly in a page, as a collection of sub pages via the layout manager, in an HTML editor, and as a page manager allowing you to expose albums of the gallery as sub pages in a web application.
  • php-sbpagemanager extends the layout manager with the ability to dynamically manage pages. The page manager can be used to allow end-users to manage the page structure and contents of a web application. It also embeds a picture gallery so that users can manage the images to be displayed.
  • php-sbbiblio is a library I created to display my bibliography on my personal homepage while I was doing my PhD research.

By today's standards, the features provided by the above packages are considered old fashioned. Still, I am particularly proud of the following quality properties (some people may consider them anti-features these days):

  • Being as declarative as possible. This means that the usage of JavaScript is minimized and non-essential. Although the framework may not deal efficiently with the page-screen mismatch because of this deliberate choice, it does provide other benefits. For example, the system is still usable when JavaScript has been disabled and even works in text-oriented browsers.
  • Small page sizes. The layout manager allows you to conveniently separate common aspects from page-specific aspects, including external stylesheets and scripts. As a result, the rendered pages are relatively small (in particular compared to many modern web sites), making page loading times fast.
  • Very thin data layer. The data manager basically works with primitive values, associative arrays and PDO. It has no strong ties to any database model or an Object-Relational-Mapper (ORM). Although this may be inconvenient from a productivity point of view, the little overhead ensures that your application is fast.

Conclusion


In this blog post, I have explained where my initial enthusiasm for the web came from, my experiences (including the negative ones), and my own framework.

The fact that I am not as passionate about the web anymore did not make me leave that domain -- web-based systems these days are ubiquitous. Today, much of the work I do is system configuration, back-end and architecture related. I am not so active on the front-end side anymore, but I still look at front-end related issues from time to time.

Moreover, I have used PHP for a very long time (in 2002 there were basically not that many appealing alternatives), but I have also used many other technologies, such as Java Servlets/JSP, Node.js, and Django, as well as client-side frameworks, such as Angular and React.

I was also briefly involved with the development of WebDSL, an ongoing project in my former research group, but my contributions were mostly system configuration management related.

Although these technologies all offer nice features and impress me in some ways, it has been a very long time since I really felt enthusiastic about anything web related.

Availability


Three years ago, I published the data manager, layout manager and bibliography packages on my GitHub page. I have now also published the remaining components. They can be used under the terms and conditions of the Apache Software License version 2.0.

In addition to the framework components, I published a repository with a number of example applications that have comparable features to the information systems I used to implement. The example applications share the same authentication system and can be combined together through the portal application. The example applications are GPLv3 licensed.

You may wonder why I published these packages after such a long time. There are a variety of reasons -- I always had the intention to make them open, but when I was younger I focused mostly on code, not on additional concerns such as documenting how the API should be used or providing example cases.

Moreover, in 2002 platforms such as GitHub did not exist yet (there was Sourceforge, but it worked on a project-level and was not as convenient) so it was very hard to publish something properly.

Finally, there are always things to improve, and I always run various kinds of experiments. Typically, I use my own projects as test subjects for other projects. I also have a couple of open ideas for which I can use pieces of my web framework. More about this later.