Harald Kirsch

genug Unfug.


Time is what it is

$ \def\Vec#1{\mathbf{#1}} \def\vt#1{\Vec{v}_{#1}(t)} \def\v#1{\Vec{v}_{#1}} \def\av{\bar{\Vec{v}}} $

If you search the Internet for an explanation of what time is, you find a lot of sites with a lot of words basically saying that they cannot say anything about time. Here I take a different approach: I ask you to compare two simple formulas and let you draw your own conclusions.

The speed of light

Given an observer whose local time is $t$ and an observed object passing by with a velocity of $\Vec{v}(t)\in\mathbb{R}^3$, Special Relativity tells us that for the proper time $\tau$ of the object the following holds: \begin{equation}\label{eq:srt} c^2 = |\Vec{v}(t)|^2 + \left(\frac{c\,d\tau(t)}{dt} \right)^2 \end{equation} where $c$ is the speed of light and $|\cdot|$ denotes the absolute value of a vector.
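For readers who want to connect equation \eqref{eq:srt} to the more familiar form of time dilation: dividing by $c^2$ and solving for the speed of aging gives

```latex
\frac{d\tau}{dt} = \sqrt{1 - \frac{|\Vec{v}(t)|^2}{c^2}} = \frac{1}{\gamma},
```

with $\gamma$ the Lorentz factor. Both extreme cases discussed next can be read off directly from this form.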

The term $d\tau/dt$ could be called the speed of aging of the object with respect to the observer. If the object is at rest with respect to the observer, i.e. $|\Vec{v}(t)|=0$, then $d\tau/dt = 1$, meaning the observer and the object age with the same speed.

The other extreme case is where the relative velocity approaches the speed of light $c$ and $d\tau/dt\to 0$ meaning that with respect to the observer, the object is not aging anymore.

Average velocity of an ensemble of points

The second formula describes the average velocity of an ensemble of $n$ points. For each point particle $0\leq i<n$, its velocity shall be $\vt{i}$. Further, all velocities shall have the same absolute value, namely $|\vt{i}|=c$. The average velocity of the ensemble is then \begin{equation} \av(t) = \frac{1}{n}\sum_{i=0}^{n-1} \vt{i} . \end{equation} For brevity we leave out the dependence on $t$ for now as we compute the difference $c^2-\av^2$. \begin{align} c^2-\av^2 &= c^2 - \frac{1}{n^2} \left(\sum_{i=0}^{n-1} \v{i}\right)^2 \\ &= c^2 - \frac{1}{n^2} \sum_{i,j=0}^{n-1} \v{i}\v{j} \end{align} The last sum is symmetric in $i$ and $j$ and therefore contains each pair $\v{i}\v{j}$ twice, except where $i=j$. The latter, quadratic terms are extracted from the sum to arrive at: \begin{align} c^2-\av^2&= \underbrace{c^2 - \frac{1}{n^2}\sum_{i=0}^{n-1} \v{i}^2}_{(A)} - \frac{1}{n^2}\sum_{i<j} 2\v{i}\v{j} \end{align} Since $\v{i}^2=|\v{i}|^2=c^2$, the term denoted as $(A)$ in the formula can be written as \begin{align} (A)&= c^2 -\frac{1}{n^2}\cdot n c^2 \\ &= c^2(1-1/n) \\ &= \frac{n-1}{n}c^2 \\ &= \frac{1}{2} n (n-1) \frac{2c^2}{n^2} \end{align} The factor $\frac{1}{2} n(n-1)= \frac{1}{2} (n^2-n)$ is the number of elements below the diagonal of a square matrix and is therefore the same as $\sum_{i<j} 1$. This lets us write $(A)$ as a sum over $i<j$ as well, and we get \begin{align} c^2 -\av^2 &= \sum_{i<j}\frac{2c^2}{n^2} - \frac{1}{n^2}\sum_{i<j} 2\v{i}\v{j}\\ &= \frac{1}{n^2} \sum_{i<j} \left(c^2-2\v{i}\v{j}+c^2\right) \\ \scriptsize \text{(since $c^2=\v{i}^2$)}\qquad &= \frac{1}{n^2} \sum_{i<j} (\v{i}-\v{j})^2 . \label{eq:final} \end{align} What does this last equation mean? Remember, we are talking about an ensemble of $n$ point particles all moving at the speed of light.
Let's envisage the particles trapped in a box. With $n$ sufficiently large and their movements sufficiently random, the velocity of the box, $\Vec{v}_B$, is a close approximation to $\av$, the average over the particle velocities.
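As a quick sanity check of the last equation of the derivation above, consider $n=2$, where the sum contains the single pair $(i,j)=(0,1)$:

```latex
c^2-\av^2
 = c^2 - \tfrac{1}{4}\left(\v{0}+\v{1}\right)^2
 = c^2 - \tfrac{1}{4}\left(2c^2 + 2\v{0}\v{1}\right)
 = \tfrac{1}{2}\left(c^2 - \v{0}\v{1}\right)
 = \tfrac{1}{4}\left(\v{0}-\v{1}\right)^2 ,
```

where the last step uses $(\v{0}-\v{1})^2 = \v{0}^2 - 2\v{0}\v{1} + \v{1}^2 = 2c^2 - 2\v{0}\v{1}$. This is exactly $\frac{1}{n^2}\sum_{i<j}(\v{i}-\v{j})^2$ for $n=2$.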

Putting it together

Suppose the box is the observed object we talked about in the first section. Then, by comparing equation \eqref{eq:srt} with the last equation \eqref{eq:final}, we get \begin{equation} \left(\frac{c\,d\tau(t)}{dt} \right)^2 = c^2- |\Vec{v}_B|^2 \approx c^2 - \av^2 = \frac{1}{n^2} \sum_{i<j} (\v{i}-\v{j})^2 \end{equation}

  1. On the far left we have the term which describes the flow of time in the box with respect to the observer.
  2. On the far right we have a purely mechanical term, the average of the squared velocity differences of the point particles in the box.

Does this make sense?

Here comes the part that I have to leave to the reader, except that I throw in my opinion anyway: it does make a lot of sense to me. Consider the box moving faster and faster, until it approaches the speed of light. What happens to the point particles in the box? Since they also move at the speed of light, along with the whole box, the delta velocities $\v{i}-\v{j}$ have to converge to zero. Consequently the relative positions of the points inside the box become constant: the content of the box is, so to speak, frozen; nothing changes anymore. This is the situation where also $c\,d\tau/dt=0$, meaning that proper time comes to a halt.

My conclusion is that (proper) time is nothing but the integrated average of change happening in a closed system. For a (closed) system of point particles, change is simply the delta velocity between points.

Open questions

Of course there are many. My priority one question is whether a similar derivation is possible for a system "filled" with a changing field, such that $d\tau/dt$ turns out to be equal to some measure of change of the field.

$\def\v#1{\mathfrak{#1}} \def\vx{\v{x}} \def\vy{\v{y}} \def\vz{\v{z}} \def\mA#1#2{a_{#1 #2}} \def\ma#1#2{a^{#1}_{#2}} \def\t#1{\tilde #1} \def\tx{\t{x}} \def\ty{\t{y}} \def\d#1{\partial #1} \def\dd#1{\partial_{#1}} \def\pderiv#1{\frac{\partial}{\partial #1}} $


Contravariant, Covariant, Tensor

(IV) Differentiation is Covariant

This is already the fourth part about tensors and such that I write down to understand this stuff myself, but others may benefit too. For the notation, please read the first and second part.

Before I start, I would like to introduce a notation which I see seldom used, but which I find quite helpful. For a function $f:(x^1,\dots,x^n)\to K$, let's denote the derivative with regard to the $k$-th parameter as $\dd{k}f$. Normally this is denoted by $\d{f}/\d{x_k}$, but this gets confusing when we have something like $\d{f(2r,4s,7t)}/\d{x_2}$, where the arguments do not contain $x_2$. From the index of $x_2$ we can see that the partial derivative with regard to the second parameter is meant, but, well, I don't like it that way. Instead I will write $\dd{2}f(2r,4s,7t)$. To be more general, I define \begin{equation} \dd{k}f(x^1,\dots,x^n) := \pderiv{x_k}f(x^1,\dots,x^n) \end{equation} to denote the derivative with regard to the $k$-th argument, without applying the inner derivative. If the arguments are themselves functions of another parameter, the chain rule then reads \begin{align*} \frac{d}{dt}f(g_1(t),\dots,g_n(t)) &= \sum_{k=1}^n \dd{k}f(g_1(t),\dots,g_n(t))\cdot \frac{d}{dt}g_k(t) \end{align*}
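A tiny example shows the notation at work. For $f(x^1,x^2)=x^1 x^2$ we have $\dd{1}f = x^2$ and $\dd{2}f = x^1$, so

```latex
\frac{d}{dt}f(t^2,t^3)
 = \dd{1}f(t^2,t^3)\cdot 2t + \dd{2}f(t^2,t^3)\cdot 3t^2
 = t^3\cdot 2t + t^2\cdot 3t^2
 = 5t^4 ,
```

which agrees with differentiating $f(t^2,t^3)=t^5$ directly.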

That said, let's look at a function defined like $f$ above on the coordinates $x^i$ or $y^j$ of a vector $\vz=x^i\vx_i=y^j\vy_j$, using Einstein notation again for summing over a pair of upper and lower indexes, where the $\vx_i$ and $\vy_j$ are basis vectors of two different bases $\v{X}$ and $\v{Y}$ of our vector space $V$. Since the function is defined on just the coordinates of a vector, the value of the function is typically not unique for a given vector, but differs with the basis used, i.e. except for special cases we have $$f(x^1,\dots,x^n)\neq f(y^1,\dots,y^n).$$ But let's look at the derivatives of $f$ with regard to the coordinates and remember that $x^i = \ma{i}{j} y^j$ for the basis change matrix $\ma{i}{j}$. \begin{align*} \pderiv{y_j}f(x^1,\dots,x^n) &= \pderiv{y_j} f(\ma{1}{k}y^k,\dots,\ma{n}{k}y^k) \\ &= \sum_{i=1}^n \dd{i}f(\ma{1}{k}y^k,\dots,\ma{n}{k}y^k) \cdot \pderiv{y_j} \ma{i}{k}y^k \\ &= \sum_{i=1}^n \dd{i}f(\ma{1}{k}y^k,\dots,\ma{n}{k}y^k) \cdot \ma{i}{j} \\ &= \sum_{i=1}^n \dd{i}f(x^1,\dots,x^n) \cdot \ma{i}{j} \\ &= \sum_{i=1}^n \ma{i}{j}\pderiv{x_i} f(x^1,\dots,x^n) \end{align*} Using Einstein notation, this is $$ \pderiv{y_j}f(x^1,\dots,x^n) = \ma{i}{j}\pderiv{x_i} f(x^1,\dots,x^n) $$ where I very consciously do not remove the function arguments just to make the formula look neat. Otherwise one could be tempted to think that this covariant transformation not only converts $\partial/\d{x_i}$ into $\partial/\d{y_j}$, but also magically replaces the $x^i$ with $y^j$ in the argument list, which it does not.

The result shows that the differential operators $\partial/\d{x_i}$ form a basis-dependent $n$-tuple which transforms covariantly. Hence it is correct that the index is a subscript.

$\def\v#1{\mathfrak{#1}} \def\vx{\v{x}} \def\vy{\v{y}} \def\vz{\v{z}} \def\mA#1#2{a_{#1 #2}} \def\ma#1#2{a^{#1}_{#2}} \def\t#1{\tilde #1} \def\tx{\t{x}} \def\ty{\t{y}} \def\d#1{\partial #1} \def\dd#1{\partial_{#1}} \def\pderiv#1{\frac{\partial}{\partial #1}} $


Contravariant, Covariant, Tensor

(III) Index Notation

If you were reading the previous two parts of this series in the hope of seeing indexes hop up and down between subscript and superscript, you may be disappointed. But don't despair. Now that we understand that there exist two different types of basis-dependent $n$-tuples, it is time to talk about superscript indexes to distinguish contravariant tuples from covariant ones.

For the notation please refer to the previous blog post. To summarize, we have seen three transformations between vector space bases, \begin{equation*} \vy_j = \sum_{i=1}^n \mA{i}{j}\vx_i, \qquad x_i = \sum_{j=1}^n \mA{i}{j} y_j, \qquad \ty_j = \sum_{i=1}^n \mA{i}{j} \tx_i \end{equation*} where

  • the first is the basis vector transformation serving as a reference for the direction of the transformations,
  • the second is the transformation of coordinates of some vector $\vz\in V$, called contravariant because it goes in the opposite direction of the first, and
  • the third is the transformation of the values $\tx_i:=f(\vx_i)$ and $\ty_j:=f(\vy_j)$ of a linear form $f:V\to K$, called covariant because it goes into the same direction as the reference.

So there are basis-dependent $n$-tuples which are covariant and others which are contravariant. The simple idea is to distinguish them by raising the index of contravariant tuples to a superscript. According to this rule, we must raise the index of coordinate values and from now on write them as $x^i$ and $y^i$, such that a vector $\vz$ is now written as \begin{equation} \vz = \sum_{i=1}^n x^i\vx_i = \sum_{j=1}^n y^j\vy_j . \label{eq:z} \end{equation}

And that was all? Not quite! Remember that for a fixed $j$ the $\mA{i}{j}$ are actually the coordinates of $\vy_j$ with regard to basis $\v{X}$. This means we must, from now on, write $\ma{i}{j}$. Our transformations are then \begin{equation*} \vy_j = \sum_{i=1}^n \ma{i}{j}\vx_i, \qquad x^i = \sum_{j=1}^n \ma{i}{j} y^j, \qquad \ty_j= \sum_{i=1}^n \ma{i}{j} \tx_i \end{equation*} Strikingly, these three sums, as well as the one in \eqref{eq:z}, always zip up an upper with a lower index. And if this is the case then, as proposed by Einstein, the summation sign shall be left out. Having matching pairs of upper and lower indexes is enough to let us know that there is a sum over this index. In this Einstein notation, our three transformations can now be written as \begin{equation*} \vy_j = \ma{i}{j}\vx_i, \qquad x^i = \ma{i}{j} y^j, \qquad \ty_j= \ma{i}{j} \tx_i. \end{equation*} Very concise. This reminds me of programming languages, where some, like Java, are more verbose than others, like Scala or Perl. The tradeoff is that the less verbose a notation or language is, the more you need to know by heart. For the expert, verbosity is less efficient, while the beginner, or even someone who has not used the notation for some time, may easily get lost.
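Spelled out for $n=2$, the concise $x^i = \ma{i}{j} y^j$ stands for the two equations

```latex
x^1 = \ma{1}{1}y^1 + \ma{1}{2}y^2, \qquad
x^2 = \ma{2}{1}y^1 + \ma{2}{2}y^2 ,
```

one for each value of the free index $i$, with the sum running over the repeated index $j$.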

But since the Einstein notation is so common I will use it too in the parts to come.

$\def\v#1{\mathfrak{#1}} \def\vx{\v{x}} \def\vy{\v{y}} \def\vz{\v{z}} \def\mA#1#2{a_{#1 #2}} \def\ma#1#2{a^{#1}_{#2}} \def\t#1{\tilde #1} \def\tx{\t{x}} \def\ty{\t{y}} \def\d#1{\partial #1} \def\dd#1{\partial_{#1}} \def\pderiv#1{\frac{\partial}{\partial #1}} $


Contravariant, Covariant, Tensor

(II) Covariance

After I understood where the term contravariant comes from, I am now ready to explain covariant. As before we have a vector space $V$ over a field $K$ with two bases \begin{align*} \v{X} &= (\vx_1,\dots,\vx_n), \qquad \vx_i\in V, \\ \v{Y} &= (\vy_1,\dots,\vy_n), \qquad \vy_j\in V \end{align*} and a set of $\mA{i}{j}\in K$ that transform $\v{X}$ into $\v{Y}$ according to \begin{equation} \vy_j= \sum_{i=1}^n \mA{i}{j}\vx_i . \label{eq:vy} \end{equation} Further we look at a linear form $f:V\to K$, i.e. a function from $V$ into $K$ that assigns an element $f(\vz)$ to each $\vz\in V$ and is linear. In particular $f$ provides us with two $n$-tupels $\tx_i:=f(\vx_i) \in K$ and $\ty_j:=f(\vy_j) \in K$, one for each of the bases.

This reminds us of the coordinates of a vector $\vz\in V$, which are also $n$-tuples of values depending on the selected basis, and we can ask whether and how we can transform the $\tx_i$ into the $\ty_j$. But this is not difficult: \begin{align*} \ty_j &= f(\vy_j) \\ &= f\left(\sum_{i=1}^n \mA{i}{j}\vx_i\right) && \text{by \eqref{eq:vy}} \\ &= \sum_{i=1}^n \mA{i}{j} f(\vx_i) \\ &= \sum_{i=1}^n \mA{i}{j} \tx_i. \end{align*} We see that the $\mA{i}{j}$ transform the basis vectors $\vx_i$ into $\vy_j$ (see \eqref{eq:vy}) as well as the coefficients $\tx_i$ into $\ty_j$. Hence these coefficients of a linear form $f$ transform in the same direction as the bases and are therefore covariant.

To summarize, the $\mA{i}{j}$ perform for us the following transformations:

  1. basis vectors $\vx_i \longrightarrow \vy_j$ (reference)
  2. vector coordinates $y_j \longrightarrow x_i$ (contravariant, opposite direction to the reference)
  3. linear form coefficients $\tx_i \longrightarrow \ty_j$ (covariant, same direction as the reference)
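A simple example makes the covariant case tangible: let the new basis vectors be scaled versions of the old ones, $\vy_j = 2\vx_j$. By linearity of $f$,

```latex
\ty_j = f(\vy_j) = f(2\vx_j) = 2\,f(\vx_j) = 2\,\tx_j ,
```

so the values of the linear form double exactly as the basis vectors do: they vary with the basis change.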

If you were reading in the hope of seeing indexes hop up and down between subscript and superscript, you may be disappointed, but don't despair. Only now that we clearly understand that there exist two different types of basis-dependent $n$-tuples is it time to talk about superscript indexes to differentiate contravariant tuples from covariant ones.

And the rules are simple. The components of a basis-dependent $n$-tupel have their index

  1. as a subscript, like $\tx_i$ and the basis vectors $\vx_i$ themselves, if the $n$-tuple transforms covariantly, and
  2. as a superscript, like $x^i$, if the $n$-tuple is contravariant.
$\def\v#1{\mathfrak{#1}} \def\vx{\v{x}} \def\vy{\v{y}} \def\vz{\v{z}} \def\mA#1#2{a_{#1 #2}} \def\ma#1#2{a^{#1}_{#2}} \def\t#1{\tilde #1} \def\tx{\t{x}} \def\ty{\t{y}} \def\d#1{\partial #1} \def\dd#1{\partial_{#1}} \def\pderiv#1{\frac{\partial}{\partial #1}} $


Contravariant, Covariant, Tensor

(I) Contravariance

For some time now I have struggled to understand what covariant and contravariant mean in the context of vectors and tensors. By writing this series of articles, I primarily explain the concepts to myself, but others may like it too.

I started reading Raum, Zeit, Materie by Hermann Weyl, a classic that explains these concepts really well. Well, at least after having read about the topic in other books and on Wikipedia, Weyl's book seems to have helped me overcome the last hurdle to understanding. Much more so, anyway, than statements like "a tensor is something that transforms like a tensor" in an otherwise nice book.

Let me describe what I learned. In the discussion of a vector space $V$ over a field $K$ the terms covariant and contravariant become relevant in particular when more than one basis comes into play. Something varies (changes) either along with the change from one basis to the other — "co-", or against it — "contra-".

Let the $n$ dimensional vector space $V$ have a basis $$ \v{X} = (\vx_1,\dots,\vx_n) , $$ which in particular means that an arbitrary element $\vz\in V$ has a unique representation with regard to $\v{X}$ as \begin{equation} \vz = x_1\vx_1 +\dots+ x_n\vx_n = \sum_{i=1}^n x_i\vx_i \,, \qquad x_i\in K . \label{eq:zFromX} \end{equation} Here is a point that can easily lead to confusion, since the $n$-tuple $(x_1,\dots,x_n)$ might be called a "vector" in other contexts. But $\vz$, as an element of the vector space $V$, is the vector, while $(x_1,\dots,x_n)$ are just the coordinates of $\vz$ with respect to $\v{X}$.

We should look at the vectors in $V$ as opaque items whose inner structure we do not know. They are not numbers, nor tuples of numbers, nor anything we can manipulate directly. All we know about them is that we can add them and multiply them with a value from $K$ to get another one of them. The coordinates are merely a handle for the vector, a view, a representation, but with a kink: they only make sense if we know the respective basis. Without reference to the basis, $(x_1,\dots,x_n)$ are not coordinates, but just a meaningless bunch of numbers.

We see how arbitrary the coordinates are as soon as we introduce another basis $$\v{Y} = (\vy_1,\dots,\vy_n),$$ different from $\v{X}$. Now the same $\vz$ has different coordinates $(y_1,\dots,y_n)$ such that \begin{equation} \vz = y_1\vy_1+\dots +y_n\vy_n = \sum_{j=1}^n y_j\vy_j . \label{eq:zFromY} \end{equation} Luckily, the two sets of coordinates are not completely arbitrary but have a relation to each other, dictated by the relation between the two bases, as can be derived as follows.

In the same way as $\vz$ is a weighted sum of basis vectors, each $\vy_j$ of the second basis is a weighted sum of the basis vectors $\vx_i$: \begin{equation} \vy_j = \mA{1}{j}\vx_1+\dots+\mA{n}{j}\vx_n = \sum_{i=1}^n \mA{i}{j}\vx_i \qquad \mA{i}{j}\in K, \, \forall j\in\{1,\dots,n\}. \label{eq:switchbase} \end{equation} So for a given $j$, the $\mA{i}{j}$ are the coordinates of $\vy_j$ with respect to basis $\v{X}$. Now we can ask how $\vz$ looks in basis $\v{Y}$ when we build each $\vy_j$ from the $\vx_i$: \begin{align} \vz &= \sum_{j=1}^n y_j\vy_j \\ &= \sum_{j=1}^n y_j\left( \sum_{i=1}^n \mA{i}{j} \vx_i\right)\\ &= \sum_{i=1}^n \left(\sum_{j=1}^n \mA{i}{j} y_j\right) \vx_i \end{align} By matching the term in parentheses with equation \eqref{eq:zFromX}, and invoking the uniqueness of coordinates given a basis, we get \begin{equation} x_i = \sum_{j=1}^n \mA{i}{j} y_j. \end{equation} Comparing this to equation \eqref{eq:switchbase}, $$ \vy_j= \sum_{i=1}^n \mA{i}{j}\vx_i , $$ we see that on the one hand the $\mA{i}{j}$ transform the basis vectors from $\v{X}$ to $\v{Y}$, while on the other hand they transform the coordinates in the opposite direction, from $\v{Y}$ coordinates to $\v{X}$ coordinates.

This is how the term contravariant comes about: the coordinates transform contravariantly to the bases. And typically one just says that the coordinates are (or transform) contravariant.
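A one-line example makes the "contra" direction tangible: let the new basis vectors be twice the old ones, $\vy_i = 2\vx_i$. Then

```latex
\vz = \sum_{j=1}^n y_j\vy_j = \sum_{j=1}^n (2y_j)\vx_j
\quad\Longrightarrow\quad
x_i = 2y_i ,
```

so while the basis vectors double in length, the coordinates with respect to the new basis are halved, $y_i = x_i/2$: they vary against the basis change.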

The next part will show that there are also $n$-tuples that are transformed by the $\mA{i}{j}$ in the same direction as the basis and are hence called covariant.


Xfce4 Convert

Due to a long-standing bug supposedly caused by gnome 3, I decided to try out something new to replace gnome and stumbled over Xfce. Within a day I was a total convert.

It turned out that the bug preventing a useful two-monitor setup from surviving a reboot is likely not related to gnome but rather to the X server, Xrandr and the like, but it looks like I'll stay with Xfce4 rather than going back to gnome.

I have been used to certain configurations for many years and do not want to unlearn them for no good reason. When I set up gnome-3 last June on my Ubuntu 14.04, it was a real pain. One of the silliest things I found was that the standard usage of ~/.Xmodmap does not work anymore. Neither just having the file nor adding an explicit call in ~/.xprofile works. I had to add an xmodmap.desktop file to ~/.config/autostart, with the nasty side effect that after a hibernation the keys are not restored. No idea why it works in Xfce4, but it does.

Many other tweaks I like were also just a pain to figure out in gnome-3. Having 3 desktops (or whatever they are called there) and simply dragging a window from one to the next: I could not get it to work. Window title bar buttons where they belong: they have been on the right for as long as they have existed, and sure, they could just as well be on the left, but what is the point of moving them? Yes, I figured out how to move them back in gnome-3, aaargh, but what a dig through the web.

Then the Jobs'ish way of having the main menu at the top of the screen. How silly is this on large screens? And it is completely useless with focus-follows-mouse. Just try it with several windows open not covering your whole screen. Again it took many searches on the web and many configuration steps to convince all applications not to use it.

Compared to this, there was much less I had to tweak in Xfce4 and, what is more, there is a large menu entry called Settings where one can just try things out. In gnome the settings are dumbed down. Why? And finally it feels like Xfce4 has more accessible documentation on the web than gnome, which also makes it easier to tweak everything.

This is my completely subjective opinion after going through the two configuration tasks to get everything like I like it. Others may have other experiences.

I was afraid the gnome way is the 'modern' way and I would have to adapt. Well, not yet, it seems. Xfce4 to the rescue :-)

EDIT 2015-01-02: While digging into a bug in eclipse I found this rant about gtk3 and gnome3. It seems my experience with gnome3 is not as subjective as I thought.


The Java-Only Web Application (Part II)


In my previous post I described that I wanted to try a pure Java web application. In particular I wanted to avoid angle-bracket programming (aka XML-programming). And I did not want to replace the angle brackets by '@'-programming. So no JSPs or similar and no Spring.

I already described the servlet engine and how to create the HTML without JSP. Further ingredients follow now:

URL Parameters

URL parameters get into the servlet like

    String value = request.getParameter("name");

but there are several pitfalls to avoid when dealing with them:

  • The parameter may not exist, if only because a user has manually trimmed the URL. In this case getParameter() returns null.
  • A parameter of the given name may exist several times in the URL. This may or may not be required by the application. If it is, we would have to use getParameterValues() instead.
  • The parameter may be available but not parseable. For example an integer value is expected but the parameter does not parse as an integer.
  • Finally the parameter may be parseable but does not fit logically to other parameters or the state of the application, like a negative value when a non-negative is needed.
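A minimal defensive parse covering the first and third pitfall could look like the following sketch. The helper class and method names are made up for illustration; only `getParameter()`'s null behavior is from the text.

```java
// Hypothetical helper: handles the string that request.getParameter()
// returned, guarding against a missing parameter (null) and against
// one that does not parse as an integer.
final class Params {
  static Integer parseIntParam(String raw, Integer defaultValue) {
    if (raw == null) {
      return defaultValue;          // parameter absent from the URL
    }
    try {
      return Integer.valueOf(raw);
    } catch (NumberFormatException e) {
      return defaultValue;          // present, but not an integer
    }
  }
}
```

The logical checks of the fourth pitfall still have to happen in the servlet, since they depend on application state.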

But this is not all there is to parameter handling. Getting them in from the URL is only one side of the coin. The other is to write them out to URLs and form parameters. To handle all this, I use a rather simple but effective class called UrlParam<T>. It is immutable and captures three pieces of information as can be seen from the constructor:

UrlParam(String name, T value, ParamCodec<T> pCodec)

We have the name of the parameter in the URL, a default value and a codec which translates back and forth between the value type and a string representation. Further, the class has methods to convert from the request and into a URL or a form parameter.

UrlParam<T> fromFirst(ServletRequest req);
String getForUrlParam();
String getForInputParam();

The latter two take care of the necessary URL or HTML encoding on top of the conversion provided by the codec. A few example codecs that were easily written are for String values (a no-op), Integer values (not forgetting to catch the unchecked NumberFormatException), booleans, enums and dates.
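The codec could look like the following sketch. The interface's method names `encode`/`decode` are my assumption; the original only names the `ParamCodec` type.

```java
// Hypothetical sketch of the codec interface described above.
interface ParamCodec<T> {
  String encode(T value);
  T decode(String text); // returns null if the text cannot be parsed
}

// An Integer codec, taking care to catch the unchecked
// NumberFormatException mentioned in the text.
class IntegerCodec implements ParamCodec<Integer> {
  @Override
  public String encode(Integer value) {
    return Integer.toString(value);
  }
  @Override
  public Integer decode(String text) {
    try {
      return Integer.valueOf(text);
    } catch (NumberFormatException e) {
      return null;
    }
  }
}
```

A String codec would be the identity in both directions, which is why the text calls it a no-op.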

A tricky part I am not yet completely decided on is whether I use null or a special value as the default value of a UrlParam to signal the complete absence of a value, when even a real default does not make sense. I tend more and more to never use null.

Since UrlParam is immutable, it turned out that statically initializing a template for each parameter was just right. It is then used at the start of a doGet() or doPost() to parse from a request parameter and later to fill either URLs or form parameters.

Validation of parameters can sometimes be done only piecemeal: is the parameter an integer, is it an existing user id, is the next parameter the name of a list, does the user with the id have access to this list, is the value he wants to enter valid for the list, etc. The validation needs to take into account complex dependencies between the parameters that are relevant only for this particular servlet. And the way to handle errors changes depending on how far we get through the validation, which is why a simple yes/no decision is not sufficient. This is typically handled at the start of a servlet request method. Only when the control flow gets past the last validation step is either the view created (doGet) or the model changed (doPost).

Database Access

Currently I am using H2 as the database in embedded mode. It comes with an easy to use connection pool.

For SQL generation I use JOOQ rather than anything JPA. The reason is that the application's focus is on manipulating the contents of the underlying database, not on using it as an object store. I don't want to abstract away from the fact that my data is in a database. On the contrary, I want to use the database as a database.

In a further blog post I want to describe the security model and access rights and what I learned from implementing it.


The Java-Only Web Application (Part I)


No <Angle Brackets>

As mentioned previously I am not a big fan of JSPs. Or let me rather say that I am not a fan of XML as a programming language's syntax in general. Be it JSP, Spring configuration, Ant or XSL. In part I might add web.xml deployment descriptors. All of these are formal languages which have three things in common:

  1. XML is used as the syntax, but not quite. They all use some kind of expression language in a different syntax. Or, in the case of JSP, it is even a mix of at least three syntaxes: HTML/XML, expression languages, Java scriptlets.
  2. All of them pretend to be purely declarative, but at least three of them nevertheless contain constructs for loops (not sure about Spring). But even for Spring it is true that it does much more than just declaration and/or configuration: it is used for full-fledged programming.
  3. All of them are interpreted or compiled too late (JSP) for real type safety.

But the pendulum swings back. After years of XML programming with Spring, some time ago the big news was that Spring does without XML "configuration". And Ant is replaced in some communities with angle bracket-free languages. Replacing the angle brackets by floods of '@'-signs in my opinion is still a stupid idea, because I am convinced it can all be done in pure Java.

To prove the case, and as an exercise, I tried to write the most often used example web application without using angle brackets: a todo list, including a security model and sharing of lists between users. It works well so far. And the ingredients are as follows.

Servlet Engine

Jetty is used as the servlet engine, in particular because it can be completely set up and configured without a single XML file. Everything goes into a main method that configures everything in simple, easy to understand, direct statements like:
    scContext.addServlet(ItemsAddServlet.class, URL_ITEMSADD);


By following the "GET-after-POST" advice (also known as Post/Redirect/Get) and strictly using POST for all changes to the underlying database, servlets naturally divide themselves into two groups. POST servlets change the database and then redirect to GET servlets to display data. This very much reminds one of the beaten MVC "pattern" by mapping M to the database, V to the GET-answering servlets and C to the POST-processing servlets.

Generating the HTML

Since I am not using JSP, how do I generate the HTML then? "The code must be full of terrible println() calls", I hear the complaints. Well, of course not! Just because we do not use JSP there is no reason to go completely braindead when it comes to generating HTML. My code is rather full of things like:

    HtmlPage page = pageTemplate();
    Html renderTodolist() {
      Html div = new Html("div")
          .setAttr("class", "row doneSeparator");
      return div;
    }

The example shows that I create a simple DOM tree. It needs one interface and four classes. I refrained from implementing one class for each HTML element to enforce correct HTML attributes and structure. Given that most editors people use to create HTML do not have that either, it seemed overkill.
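To make the idea concrete, here is a minimal sketch of such a DOM class. Only the `Html("div")` constructor and the chainable `setAttr()` appear in the original; `add()` and the `toString()` rendering are my assumptions about how the real class might work.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a chainable DOM-tree element as described above.
class Html {
  private final String tag;
  private final Map<String, String> attrs = new LinkedHashMap<>();
  private final List<Object> children = new ArrayList<>();

  Html(String tag) { this.tag = tag; }

  Html setAttr(String name, String value) {
    attrs.put(name, value);
    return this;               // enables the chaining shown above
  }
  Html add(Object childOrText) {
    children.add(childOrText); // nested Html elements or plain text
    return this;
  }
  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder("<").append(tag);
    for (Map.Entry<String, String> a : attrs.entrySet()) {
      sb.append(' ').append(a.getKey())
        .append("=\"").append(a.getValue()).append('"');
    }
    sb.append('>');
    for (Object child : children) sb.append(child);
    return sb.append("</").append(tag).append('>').toString();
  }
}
```

A real implementation would additionally HTML-escape text children and attribute values.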

The nice thing about this approach is that it allows us to apply all the good habits we acquired for clean code to writing HTML as well, which is quite difficult if the HTML is written in the language hodgepodge called JSP. Eventually the rather trivial DOM tree is sent to the user's browser with

      Writer w = response.getWriter();

In the sequel blog post I will describe how to deal with URL parameters and the database.


Use Cases for ValEx

The last blog entry introduced the idea of ValEx as a variation of Java's Optional. It can be initialized not only with a value, but alternatively with an exception which explains why no value is available. The central method of this class is the getter for the contained value:

    public class ValEx<T,E extends RuntimeException> {
      public T get() throws E {
        if (t==null) throw e;
        return t;
      }
    }

It allows the caller to decide how sure (s)he is that a value is available. If, from our code structure, we are sure that there is a value, we can just call get() without previously checking with isPresent(). If we are wrong, this is a bug and an unchecked exception is thrown. How does this feel in practical use? Let's look at some typical exceptions in Java.

Exceptions When Parsing Strings

There are many cases where strings are parsed or converted into objects. This includes parsing dates and regular expressions, or even very simple things like getting a Charset for an encoding name. As already shown in the last blog entry, converting static strings should not throw a checked exception. Who has not yet written code like

      try {
        Writer w = new OutputStreamWriter(out, "UTF-8");
      } catch (UnsupportedEncodingException e) {
        log.error("how can UTF-8 be missing", e);
      }

and wondered why a wrong character set name results in a checked exception. The solution here is actually to use a slightly different call:

      Writer w = new OutputStreamWriter(out, Charset.forName("UTF-8"));

where forName() throws an unchecked exception that will only be thrown if things are getting weird. Does ValEx have an application here? Well, not as long as 100% of the cases involve static strings. But suppose the character set name is read from a property.

      String csName = System.getProperty("application.charset.name");
      Writer w = new OutputStreamWriter(out, Charset.forName(csName));

This code is now missing a try/catch, although it is not improbable that a system property contains a wrong string. In this particular case we could switch back to not using forName(), but with a hypothetical factory method returning a ValEx we can easily have both. With a static string

       Writer w = Files.streamWriter(out, "UTF-8").get();

we get an exception from the get() in case we have a typo in the code. And with a character set name from a property

      String csName = System.getProperty("application.charset.name");
      ValEx<Writer,?> vw = Files.streamWriter(out, csName);
      if (vw.isEmpty()) {
        // handle the problem, possibly re-throwing
        throw vw.getException();
      }
      Writer w = vw.get();

we can first check whether we got something back. So ValEx allows us to have both: the convenience of an unchecked exception, with the minor price of the unshielded get() call, as well as the awareness of a potentially unsuccessful operation that can be checked for with isEmpty().


An IOException is probably one of the most frequent exceptions to deal with. Let's see whether ValEx can help here too. With a hypothetical factory method again, assume we had

      String readFile(String name) throws IOException {
        ValEx<Reader,IOException> vr = Files.newReader(name);
        if (vr.isEmpty()) {
          throw vr.getException();
        }
        Reader r = vr.get();
        // ...
      }

Is this any better than the try/catch version? Probably not if used this way. What still bugs me is that an exception is prepared in newReader() and eventually thrown despite the fact that absolutely nothing exceptional is going on. Opening a file and finding that this cannot be done is completely normal business. Even if we just wrote the file, some other process or thread could have intercepted already and messed with the file in all kinds of ways that prevent us from opening it — normal, not exceptional!

In this case it may be worth considering this implementation:

    ValEx<String,String> readFile(String name) {
      ValEx<Reader,String> vr = Files.newReader(name);
      if (vr.isEmpty()) {
        return ValEx.empty(vr.getCause());
      }
      Reader r = vr.get();
      // ...
    }

Here I start to change my mind with regard to how exactly ValEx should be implemented. In the previous blog I proposed to always use an exception as the description of what went wrong. But this forces the provider of the ValEx to create an exception with its full stack trace, even for normal business. Therefore I would now rather implement ValEx with a completely unconstrained second generic argument.

    public class ValEx<T,Cause> {
      public static <T,Cause> ValEx<T,Cause> of(T value) {
        return new ValEx<>(value, null);
      }
      public static <T,Cause> ValEx<T,Cause> empty(Cause cause) {
        return new ValEx<>(null, cause);
      }
    }

The get() method becomes a bit more involved, but would basically throw an IllegalStateException with the Cause either as the message or as the cause of the exception.
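Put together, a minimal sketch of the class with the unconstrained cause parameter could look as follows. The exact behavior of get() is my assumption based on the description above: it wraps the cause in an unchecked IllegalStateException, either as cause or as message.

```java
// Minimal sketch of the value-or-cause class discussed above.
public class ValEx<T, Cause> {
  private final T value;
  private final Cause cause;

  private ValEx(T value, Cause cause) {
    this.value = value;
    this.cause = cause;
  }
  public static <T, Cause> ValEx<T, Cause> of(T value) {
    return new ValEx<>(value, null);
  }
  public static <T, Cause> ValEx<T, Cause> empty(Cause cause) {
    return new ValEx<>(null, cause);
  }
  public boolean isEmpty() {
    return value == null;
  }
  public Cause getCause() {
    return cause;
  }
  public T get() {
    if (value == null) {
      // An exception is created only now, i.e. only when the caller
      // wrongly assumed a value to be present.
      if (cause instanceof Throwable) {
        throw new IllegalStateException((Throwable) cause);
      }
      throw new IllegalStateException(String.valueOf(cause));
    }
    return value;
  }
}
```

Note that no stack trace is filled in on the empty() path; the cost of exception creation is deferred to the buggy get() call, which is the whole point of the change of mind described above.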


It looks like ValEx allows us to combine the benefit of checked exceptions — visibility in the API — with the convenience of unchecked exceptions, which are thrown only when the cause is a programming problem. By allowing ValEx to store the cause just as a message, with no exception, the expensive exception generation can be avoided where a failure to provide a value is normal and expected. Still, if it is necessary to log the case, ValEx improves over Optional in that it can provide the cause why no value is available.

My recent experiment is an HTML app showing OpenStreetMap maps. It is particularly targeted at mobile devices with Javascript support for geolocation.

Here is the map.