Visual acuity and type design

[Abridged]

Ernesto Peña (he/him)
4 min read · Mar 16, 2017

If you are old enough to read this, you are very likely familiar with the Snellen chart, even if you are not among the many people who have diminished vision of some sort (1.8% in the US, according to Wolfram Alpha). The Snellen chart might no longer be the most effective instrument for measuring visual acuity, but the principles behind its operation are as valid as they were around 150 years ago. Snellen charts are based on individual uppercase roman characters called optotypes, which, like many other design artifacts, have a fascinating history that I will not expand upon here (although Lorrie Frear has a very nice article about them on ilovetypography.com). Snellen optotypes, and virtually any other sign system developed for the same purpose, require the interplay of two almost opposite features:

  1. They must make it possible to measure the minimum perceptible unit.
  2. They must be distinctive enough to be easily identifiable.

Opposite, because a minimum perceptible unit would have to be, understandably, as simple as possible; so simple that it would be impossible to distinguish between more than one of these units. If there is no way to compare, there is no way to assess visual acuity. Snellen achieved this by using a fixed grid based on the smallest perceptible unit: all the optotypes are framed within this 5 × 5 grid. The purpose of the optotype is to provide context for these minimal units.

The way the optotypes are used to assess visual acuity is by applying basic trigonometry to calculate the visual angle a person needs in order to identify an optotype (which would mean that the individual can perceive the minimum unit) at a given distance. By convention, the whole optotype subtends a “standard” angle of 5 minutes of arc, one minute per grid unit, at the testing distance. If the person can identify the optotype at that angle, vision is considered normal; if the angle required has to be wider (the optotype has to be bigger), vision is below normal.
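
To make the geometry concrete, here is a minimal sketch in Python. The helper name and the sample numbers are mine, not Snellen’s; the 5-arcmin convention and the 2 · arctan(size / 2 · distance) formula are the standard ones:

```python
import math

STANDARD_ANGLE_ARCMIN = 5.0  # a full Snellen optotype subtends 5 arcmin (1 arcmin per grid unit)

def visual_angle_arcmin(size_mm: float, distance_mm: float) -> float:
    """Visual angle (in minutes of arc) subtended by an object of size_mm viewed at distance_mm."""
    return math.degrees(2 * math.atan(size_mm / (2 * distance_mm))) * 60

# A 20/20 optotype viewed at 20 feet (about 6096 mm) is roughly 8.87 mm tall:
angle = visual_angle_arcmin(8.87, 6096)
print(f"{angle:.2f} arcmin")  # ~5.00, i.e. the standard angle for normal vision
```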

Now, the interesting thing about Snellen’s system is that it can potentially work the other way around. I think it is possible to use the same principles for calculating at least some aspects of typeface legibility.

If we are to use the idea behind the optotypes for real typographic forms, we would only have to detect the minimum distance between two points that is critical for the identification of the sign (only, he said) and measure the entire character in this unit. This would give us the angle that a person with normal vision would require to perceive that minimum unit, and therefore the entire character. This number, which I call relative legibility for lack of a better term, would, in combination with a viewing distance, provide the size at which the typographic sign keeps its legibility; in combination with a size, it would provide the maximum distance at which the sign can still be perceived. A sketch of both calculations follows.
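
Here is a minimal sketch in Python of that two-way calculation. The function names and the sample glyph numbers are mine (hypothetical), and it assumes the standard 1-arcmin threshold for the minimum perceptible detail:

```python
import math

ONE_ARCMIN_RAD = math.radians(1 / 60)  # threshold detail for "normal" (20/20) vision

def relative_legibility(char_height: float, critical_detail: float) -> float:
    """How many 'minimum units' tall the character is (Snellen optotypes: 5)."""
    return char_height / critical_detail

def min_size_at_distance(rel_legibility: float, distance_mm: float) -> float:
    """Smallest character height (mm) whose critical detail subtends 1 arcmin at this distance."""
    detail = 2 * distance_mm * math.tan(ONE_ARCMIN_RAD / 2)
    return detail * rel_legibility

def max_distance_at_size(rel_legibility: float, char_height_mm: float) -> float:
    """Farthest distance (mm) at which the critical detail still subtends 1 arcmin."""
    detail = char_height_mm / rel_legibility
    return detail / (2 * math.tan(ONE_ARCMIN_RAD / 2))

# Hypothetical glyph: 10 mm tall, finest critical gap 1.25 mm -> relative legibility of 8
rl = relative_legibility(10.0, 1.25)
print(min_size_at_distance(rl, 6096))  # ~14.2 mm needed to stay legible at 6 m
print(max_distance_at_size(rl, 10.0))  # ~4297 mm (~4.3 m) for a 10 mm glyph
```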

There are some provisos, of course. The minimum unit should be used to measure the x-height, and not the font size as suggested by ISO standards, since x-height has been identified as more important to the legibility of a font.
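
For instance, once a threshold x-height comes out of the previous calculation, it still has to be translated into a nominal point size, and that conversion depends on the typeface’s own x-height-to-em ratio. A small sketch, assuming a hypothetical ratio (real fonts vary, commonly somewhere around 0.45 to 0.55 of the em):

```python
MM_PER_POINT = 0.3528  # 1 pt = 1/72 inch

def point_size_for_x_height(x_height_mm: float, x_height_ratio: float = 0.5) -> float:
    """Nominal point size whose x-height equals x_height_mm, given the font's
    x-height as a fraction of the em (the ratio is font-specific; 0.5 is a placeholder)."""
    em_mm = x_height_mm / x_height_ratio
    return em_mm / MM_PER_POINT

# If the legibility threshold calls for a 3 mm x-height:
print(point_size_for_x_height(3.0))        # ~17 pt for a font with x-height = 0.5 em
print(point_size_for_x_height(3.0, 0.45))  # ~18.9 pt for a font with a smaller x-height
```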

This principle can also be applied to typeface design. Not that it has never been applied, but it seems to have been used in a rather intuitive fashion.

I published a paper in The Winnower called About Visual Acuity and Type Design: A Protocol that provides more details on this. You can also find the paper here. If you have not already, take a look at The Winnower and feel free to provide feedback on the paper.
