Euclidean rhythms as morphic words

Euclidean rhythms are a method for generating rhythmic patterns that distributes k onsets over n beats (k < n) as evenly as possible. They were first described in 2004 by Godfried Toussaint [1], who showed how these rhythms appear in music all over the world and presented an algorithm by Björklund [2] to generate them. There is also a nice presentation of the math in [3]. All algorithms, including mine, rely on the Euclidean Algorithm, hence the name.

Toussaint’s paper is very interesting, but I found the description of Björklund’s algorithm a bit confusing. Looking at the C code from Björklund’s original paper is a bit better, but I think it lacks some modularity, and it doesn’t really explain why the algorithm works.

Here, I present an algorithm to compute Euclidean rhythms using continued fractions and morphic words. Using these concepts, the algorithm becomes very simple, and it has the added benefit that it works not only for integers n and k but for any real numbers. It is equivalent to Björklund’s algorithm but, in my opinion, much simpler.

Euclidean Rhythm

We represent a rhythmic pattern as a finite sequence of 0’s and 1’s where 1 is an onset and 0 is a rest. So the first half of a classic clave pattern (which can be computed as a Euclidean rhythm with k=3 and n=8) is written as 10010010.

Generating a Euclidean rhythm for k onsets over n beats can be done using a cutting sequence for a straight line with slope k / n. This means that we can define the i‘th element in the sequence as

s_i = \left\lfloor \frac{(i+1)k}{n} \right\rfloor - \left\lfloor \frac{ik}{n} \right\rfloor.

This gives a geometric interpretation of Euclidean rhythms: the goal is to distribute the onsets as evenly as possible among the beats. Note that if you use this in the example with k=3 and n=8, starting from i = 0, we get the sequence 00100101, which is a rotation of the desired result; we will say that sequences that are equal up to rotation are equivalent.
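The cutting-sequence definition translates directly into Rust; the floors are simply integer divisions (a minimal sketch, with a function name of my own choosing):

```rust
/// The cutting sequence for a line with slope k/n: the i'th element is
/// floor((i+1)k/n) - floor(ik/n). Integer division implements the floors.
fn cutting_sequence(k: u64, n: u64) -> Vec<u64> {
    (0..n).map(|i| (i + 1) * k / n - i * k / n).collect()
}
```

For k = 3 and n = 8 this yields 00100101, a rotation of the clave pattern above.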

Prerequisites

We define the rhythmic pattern as a morphic word using a family of endomorphisms:

\theta_a(x) = \begin{cases} 0^{a-1}1 & \text{if } x = 0, \\ 0^{a-1}10 & \text{if } x = 1.
\end{cases}

Note that since these are endomorphisms, they extend to a mapping on words by applying them one letter at a time. So, e.g.,

\theta_2(101) = \theta_2(1) \theta_2(0) \theta_2(1) = 01001010.

Recall that a (simple) continued fraction is a representation of a real number,

a = a_0 + \frac{1}{a_1 + {\frac{1}{a_2 + \frac{1}{a_3 + \cdots}}}}

which is written a = [a_0; a_1, a_2, \ldots]. Every real number has a representation as a continued fraction, and the representation is finite if and only if the number is rational. Computing the continued fraction of a rational number is equivalent to running the Euclidean Algorithm on the numerator and denominator.

The algorithm

Using these two concepts, we can define a very simple algorithm for generating a Euclidean rhythm:

  1. Write \frac{k}{n} = [0; a_1, a_2, \ldots, a_m] as a continued fraction,
  2. Return \theta_{a_m} \circ \theta_{a_{m-1}} \circ \ldots \circ \theta_{a_1} (0).

We assume that n and k are coprime – otherwise we can reduce and simply repeat the pattern.

For the example above with k = 3 and n = 8, we have \frac{3}{8} = [0; 2, 1, 2], so applying the morphisms in order gives the Euclidean rhythm as

0 \overset{\theta_2}{\rightarrow} 01 \overset{\theta_1}{\rightarrow} 110 \overset{\theta_2}{\rightarrow} 01001001

which is a rotation of the sequence we found above.
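The whole algorithm fits in a few lines of Rust (a minimal sketch with names of my own choosing; it assumes k and n are coprime as discussed above):

```rust
/// Continued fraction of k/n with 0 < k < n and gcd(k, n) = 1:
/// returns [a_1, ..., a_m] where k/n = [0; a_1, ..., a_m].
/// This is just the Euclidean Algorithm, recording the quotients.
fn continued_fraction(mut k: u64, mut n: u64) -> Vec<u64> {
    let mut terms = Vec::new();
    while k != 0 {
        terms.push(n / k);
        let r = n % k;
        n = k;
        k = r;
    }
    terms
}

/// Apply theta_a to a word: 0 -> 0^{a-1} 1 and 1 -> 0^{a-1} 1 0.
fn theta(a: u64, word: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    for &x in word {
        for _ in 0..a - 1 {
            out.push(0);
        }
        out.push(1);
        if x == 1 {
            out.push(0);
        }
    }
    out
}

/// The Euclidean rhythm for k onsets over n beats, as a composition of
/// morphisms theta_{a_1}, ..., theta_{a_m} applied to the word "0".
fn euclidean_rhythm(k: u64, n: u64) -> Vec<u8> {
    let mut word = vec![0u8];
    for &a in &continued_fraction(k, n) {
        word = theta(a, &word);
    }
    word
}
```

For k = 3 and n = 8 this reproduces 01001001, exactly as in the derivation above.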

Non-rational patterns — a Golden rhythm

Since every number has a continued fraction representation, the above algorithm also makes sense if \frac{k}{n} is not rational. The musical interpretation of this is less clear, since it requires an infinite number of beats, but the recursive nature of the algorithm may give a hint of what is going on.

Consider the inverse golden ratio which has continued fraction representation

\frac{-1 + \sqrt{5}}{2} = [0; 1,1,\ldots].

Since this continued fraction expansion is infinite, the above algorithm won’t terminate, but if we set some limit, say m, and consider only the first m terms, we get a rational approximation which improves as m grows. This approximation equals the Euclidean rhythm which distributes F_{m-1} onsets over F_m beats, where F_m is the m‘th Fibonacci number, and equals, up to exchanging 0’s and 1’s, the prefix of length F_{m} of the Fibonacci word.
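Since all the partial quotients are 1, the approximations only involve the morphism \theta_1, which maps 0 to 1 and 1 to 10. A small self-contained Rust sketch (the function name is mine):

```rust
/// Apply theta_1 (0 -> 1, 1 -> 10) m times to the word "0". This yields the
/// rational approximations of the "golden rhythm"; the word lengths are
/// Fibonacci numbers.
fn golden_rhythm(m: usize) -> Vec<u8> {
    let mut word = vec![0u8];
    for _ in 0..m {
        word = word
            .iter()
            .flat_map(|&x| if x == 0 { vec![1] } else { vec![1, 0] })
            .collect();
    }
    word
}
```

For example, five iterations give 10110101, a word of length F_6 = 8 with 5 onsets, approximating the inverse golden ratio 5/8 ≈ 0.618.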

Source code

I have written some code in Rust which tests the equivalence of constructing Euclidean rhythms as cutting sequences, from Björklund’s algorithm, and using my algorithm.

References

[1] https://cgm.cs.mcgill.ca/~godfried/publications/banff.pdf
[2] https://www.semanticscholar.org/paper/The-Theory-of-Rep-Rate-Pattern-Generation-in-the-Bjorklund/c652d0a32895afc5d50b6527447824c31a553659
[3] https://erikdemaine.org/papers/DeepRhythms_CGTA/paper.pdf

Sampling random derangements

Recall that a derangement is a permutation without fixed points. The number of derangements of a set of size n is known as the subfactorial, denoted !n, and computing it was first solved by Pierre Raymond de Montmort in 1713. It is now known that

!n = \left\lfloor \frac{n!}{e} + \frac{1}{2} \right\rfloor \quad \text{for } n \geq 1,

i.e. !n is n!/e rounded to the nearest integer. This is the source of the joke that the probability that none of n drunk mathematicians gets their own coat after a dinner party is approximately 1/e.

A simple way to sample a random derangement is to use rejection sampling: sample random permutations, e.g. using the Fisher-Yates shuffle, until you get a derangement. Since the probability that a random permutation is a derangement tends to 1/e, on average about e samplings are needed.

A nicer way to do this, which avoids the rejection sampling, is presented in “Generating Random Derangements” by Martínez, Panholzer and Prodinger. This method has complexity similar to that of the Fisher-Yates shuffle and is simple to implement. Here, I present a slight variation of Martínez et al.’s algorithm with two improvements:

  • It avoids rejection sampling on the index to swap with. I found this a bit annoying, since one of the reasons for not using Fisher-Yates is to avoid rejection sampling.
  • By using a different data structure for the marked elements, I avoid one of the nested if statements.

The complexity and performance should be about the same as the original algorithm but it is slightly more elegant in my opinion.

The function looks as follows in Rust. It assumes that there is a function, subfactorials, which computes the subfactorials from 0 to n as described above:

pub fn sample_derangement(n: usize) -> Vec<usize> {
    let mut rng = rand::rng();

    // d[i] holds the i'th subfactorial !i.
    let d = subfactorials(n);

    let mut permutation = (0..n).collect::<Vec<usize>>();
    let mut unmarked = (0..n).collect::<Vec<usize>>();

    let mut u = n - 1;
    while u > 0 {
        // Take the largest unmarked index and swap it with a random
        // smaller unmarked index.
        let i = unmarked.pop().unwrap();
        let j = rng.random_range(0..unmarked.len());
        permutation.swap(i, unmarked[j]);
        // With this probability, i and unmarked[j] form a 2-cycle, so
        // unmarked[j] is marked (removed) and will not be swapped again.
        if rng.random_bool(d[u - 1] as f64 * u as f64 / d[u + 1] as f64) {
            unmarked.remove(j);
            u -= 1;
            if u == 0 {
                // Avoid the final decrement underflowing.
                break;
            };
        };
        u -= 1;
    }
    permutation
}
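The subfactorials can be computed with the standard recurrence !k = (k-1)(!(k-1) + !(k-2)). A possible implementation matching the signature assumed above (a sketch; the return type is my choice, and the caller above only uses the values as f64 ratios):

```rust
/// Computes the subfactorials !0, !1, ..., !n using the recurrence
/// !k = (k - 1) * (!(k - 1) + !(k - 2)) with !0 = 1 and !1 = 0.
fn subfactorials(n: usize) -> Vec<u128> {
    let mut d = Vec::with_capacity(n + 1);
    d.push(1); // !0 = 1: the empty permutation has no fixed points.
    if n >= 1 {
        d.push(0); // !1 = 0: a single element is always a fixed point.
    }
    for k in 2..=n {
        d.push((k as u128 - 1) * (d[k - 1] + d[k - 2]));
    }
    d
}
```

For example, subfactorials(5) gives [1, 0, 1, 2, 9, 44].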

Evaluate a polynomial on points in arithmetic progression

I recently ran into a situation where I needed a fast way to evaluate a polynomial on the inputs 1,\ldots,n for some large n. It turns out that there is a way of doing this quicker than evaluating with something like Horner’s method on each input, assuming that n is sufficiently larger than the degree of the polynomial. As with many other ingenious algorithms, this one can be found in Donald Knuth’s “The Art of Computer Programming”, where it is mentioned briefly in section 4.6.4 and left as an exercise, and it is in fact more general: it computes evaluations on points in arithmetic progression. Here I give some more details about the algorithm and provide an implementation in Rust.

Let f(x) = f_0 + f_1x + \cdots + f_d x^d be a polynomial in some ring and let x_0 and h define an arithmetic progression, x_0, x_0 + h, x_0 + 2h, \ldots. There is a pre-computation step before we can get to the actual evaluation: First we compute the evaluations of the first d+1 elements of the arithmetic progression:

y_j = f(x_0 + jh), \quad j = 0,\ldots,d.

Since this gives us the first d+1 evaluations, the method only pays off when we need more than d+1 points. To complete the pre-processing, we let, for all k=1,\ldots,d and j=d,\ldots,k (going down),

y_j = y_j - y_{j-1}.

Now, y_0 holds the evaluation of the first value in the arithmetic progression, i.e. f(x_0), and we can implement the evaluation as an iterator. To get the next evaluation, let for j = 0,\ldots, d-1,

y_j = y_j + y_{j+1},\qquad (1)

and yield y_0 as the next evaluation. Notice that in this step, an evaluation can be computed with d additions where a naive evaluation using Horner’s method requires d additions and d multiplications.

There are a few optimisations that can be implemented for this algorithm:

  1. Since the first d+1 evaluations are computed in the preprocessing, we can store these and yield them directly. This is a very small optimisation, since the iteration step (1) has to be computed anyway, and it gives the iterator a larger state.
  2. Notice that in a single iteration, the new y_0 only depends on y_0 and y_1, so in the last iterations (when the number of remaining evaluations is known), some of the steps in (1) can be skipped.
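Putting the pre-processing and the iteration step together, the evaluator can be sketched as follows in Rust (names are my own; the GitHub version referenced below may differ):

```rust
/// Plain Horner evaluation, used only for the d+1 pre-computed points.
fn horner(coeffs: &[i64], x: i64) -> i64 {
    coeffs.iter().rev().fold(0, |acc, &c| acc * x + c)
}

/// Yields f(x0), f(x0 + h), f(x0 + 2h), ... where `coeffs` holds
/// f_0, ..., f_d. After pre-processing, each step costs only d additions.
fn progression_eval(coeffs: &[i64], x0: i64, h: i64) -> impl Iterator<Item = i64> {
    let d = coeffs.len() - 1;
    // Pre-processing: y_j = f(x0 + j*h) for j = 0, ..., d ...
    let mut y: Vec<i64> = (0..=d).map(|j| horner(coeffs, x0 + j as i64 * h)).collect();
    // ... then difference: for k = 1..=d and j = d down to k, y_j -= y_{j-1}.
    for k in 1..=d {
        for j in (k..=d).rev() {
            y[j] -= y[j - 1];
        }
    }
    std::iter::from_fn(move || {
        let next = y[0];
        // Iteration step (1): y_j += y_{j+1} for j = 0, ..., d-1.
        for j in 0..d {
            y[j] += y[j + 1];
        }
        Some(next)
    })
}
```

For f(x) = x^2 with x_0 = 1 and h = 1, the iterator yields 1, 4, 9, 16, 25, ...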

The algorithm implemented in Rust is available on GitHub.

Per Nørgaard’s Infinity Series computed iteratively

The Danish composer Per Nørgård (1932-2025) invented a serial composition method based on a sequence of integers called the “Infinity Series”. The series is most famously used in the symphony “Voyage into the Golden Screen”.

Mathematically speaking, the series is actually a sequence, and is simple to define, e.g. by

a_0 = 0 \\
a_{2n} = -a_n \\
a_{2n + 1} = a_n + 1

There are many nice articles online explaining the properties of the sequence and many ways to derive it, so we won’t go into that here. Instead we will present a way to compute the sequence iteratively based on the following equivalent formula:

a_0 = 0 \\
a_k = (-1)^{\nu_2(k)} \left( \nu_2(k) + 1 - a_{k-1} \right), \quad k \geq 1

Here, \nu_2(k) is the 2-adic valuation of k, which can be computed efficiently using a count-trailing-zeros (CTZ) function (e.g. trailing_zeros in Rust). This gives a way to compute the sequence as an iterator. In Rust, an implementation would look something like this:

pub struct NørgårdsInfinitySequence {
    previous: i64,
    index: u64,
}

impl NørgårdsInfinitySequence {
    pub fn new() -> Self {
        NørgårdsInfinitySequence {
            previous: 0,
            index: 0,
        }
    }
}

impl Iterator for NørgårdsInfinitySequence {
    type Item = i64;

    fn next(&mut self) -> Option<Self::Item> {
        if self.index > 0 {
            let v = self.index.trailing_zeros() as i64;
            self.previous = v + 1 - self.previous;
            if v % 2 == 1 {
                self.previous = -self.previous;
            }
        }
        self.index += 1;
        Some(self.previous)
    }
}

Angles in Lego

Introduction

Most Lego instructions place bricks aligned with the grid, but a few of them use bricks at angles, for example the Corner Garage. If we look at a Lego board as a coordinate system, and assume that the brick we want to place at an angle has one end at (0,0), then it is possible to end it at a stud at (a,b) if there is an integer c such that a^2 + b^2 = c^2. This is known as a Pythagorean triple, and it is known that there are infinitely many of these. The smallest one, which can also be used in Lego, is (3,4,5), so a 6-stud piece can be placed at an angle from a stud to the stud 4 studs to the right and 3 studs above. Note that we need a 6-stud piece to cover a distance of 5, since the distance between its first and last stud is 5.

Near Pythagorean triples

There are only a few small Pythagorean triples usable in Lego construction, where the longest brick is 16 studs long. However, the Corner Garage mentioned earlier uses a triple (12, 12, 17) to produce a 45° angle, even though this is not a Pythagorean triple, since \sqrt{12^2 + 12^2} \approx 16.9706 \neq 17. The error (distance to the nearest stud) for this Lego-approved triple is ~0.03, so we use this as an upper bound on the error for the triples computed here.

We may extend the list of triples further because there are Lego pieces which allow us to put a brick halfway between studs, so we do not need to restrict ourselves to integers but may include half-integers too. If we restrict the coordinates to be at most 15 units long, there are 71 triples. Sorted by angle they are:

\begin{align*}
1.9°&: (15, 0.5, 15),\newline
2°&: (14.5, 0.5, 14.5),\newline
2°&: (14, 0.5, 14),\newline
2.1°&: (13.5, 0.5, 13.5),\newline
2.2°&: (13, 0.5, 13),\newline
2.3°&: (12.5, 0.5, 12.5),\newline
2.4°&: (12, 0.5, 12),\newline
2.5°&: (11.5, 0.5, 11.5),\newline
2.6°&: (11, 0.5, 11),\newline
2.7°&: (10.5, 0.5, 10.5),\newline
2.9°&: (10, 0.5, 10),\newline
3°&: (9.5, 0.5, 9.5),\newline
3.2°&: (9, 0.5, 9),\newline
3.4°&: (8.5, 0.5, 8.5),\newline
3.6°&: (8, 0.5, 8),\newline
3.8°&: (7.5, 0.5, 7.5),\newline
4.1°&: (7, 0.5, 7),\newline
4.4°&: (6.5, 0.5, 6.5),\newline
4.8°&: (6, 0.5, 6),\newline
5.2°&: (5.5, 0.5, 5.5),\newline
5.7°&: (5, 0.5, 5),\newline
6.3°&: (4.5, 0.5, 4.5),\newline
14.9°&: (15, 4, 15.5),\newline
15.6°&: (12.5, 3.5, 13),\newline
16.3°&: (12, 3.5, 12.5),\newline
16.9°&: (11.5, 3.5, 12),\newline
18.4°&: (9, 3, 9.5),\newline
19.4°&: (8.5, 3, 9),\newline
20.1°&: (15, 5.5, 16),\newline
20.8°&: (14.5, 5.5, 15.5),\newline
22.6°&: (6, 2.5, 6.5),\newline
22.6°&: (12, 5, 13),\newline
25.3°&: (9.5, 4.5, 10.5),\newline
25.7°&: (13.5, 6.5, 15),\newline
26.6°&: (4, 2, 4.5),\newline
27.6°&: (11.5, 6, 13),\newline
28.1°&: (7.5, 4, 8.5),\newline
28.1°&: (15, 8, 17),\newline
28.6°&: (11, 6, 12.5),\newline
30°&: (13, 7.5, 15),\newline
30.1°&: (9.5, 5.5, 11),\newline
31°&: (15, 9, 17.5),\newline
32.5°&: (5.5, 3.5, 6.5),\newline
33.7°&: (7.5, 5, 9),\newline
33.7°&: (15, 10, 18),\newline
34.4°&: (9.5, 6.5, 11.5),\newline
34.8°&: (11.5, 8, 14),\newline
35.1°&: (13.5, 9.5, 16.5),\newline
36.9°&: (2, 1.5, 2.5),\newline
36.9°&: (4, 3, 5),\newline
36.9°&: (6, 4.5, 7.5),\newline
36.9°&: (8, 6, 10),\newline
36.9°&: (10, 7.5, 12.5),\newline
36.9°&: (12, 9, 15),\newline
36.9°&: (14, 10.5, 17.5),\newline
38.4°&: (14.5, 11.5, 18.5),\newline
38.7°&: (12.5, 10, 16),\newline
39°&: (10.5, 8.5, 13.5),\newline
39.5°&: (8.5, 7, 11),\newline
39.8°&: (15, 12.5, 19.5),\newline
40.2°&: (6.5, 5.5, 8.5),\newline
40.2°&: (13, 11, 17),\newline
41.6°&: (4.5, 4, 6),\newline
41.9°&: (14.5, 13, 19.5),\newline
42.6°&: (12.5, 11.5, 17),\newline
43°&: (15, 14, 20.5),\newline
43.6°&: (10.5, 10, 14.5),\newline
45°&: (6, 6, 8.5),\newline
45°&: (8.5, 8.5, 12),\newline
45°&: (12, 12, 17),\newline
45°&: (14.5, 14.5, 20.5),\newline
\end{align*}

Note that only angles up to 45 degrees are included here, since larger angles can be constructed by symmetry. Below are pictures of two examples of the triples (7.5, 4, 8.5) (left) and (6, 2.5, 6.5) (right).
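The search for these triples is a simple brute force over the half-integer grid. A sketch in Rust (my own naming; the vertical case below is analogous with vertical steps of \frac{2}{5} instead of \frac{1}{2}):

```rust
/// Enumerate near Pythagorean triples (a, b, c) with a, b half-integers,
/// 0.5 <= b <= a <= max (angles up to 45 degrees), where the hypotenuse
/// sqrt(a^2 + b^2) is within `tol` of the nearest half-integer c.
fn near_triples(max: f64, tol: f64) -> Vec<(f64, f64, f64)> {
    let mut result = Vec::new();
    let steps = (max * 2.0) as i32;
    for i in 1..=steps {
        for j in 1..=i {
            let (a, b) = (i as f64 / 2.0, j as f64 / 2.0);
            let d = (a * a + b * b).sqrt();
            let c = (d * 2.0).round() / 2.0; // nearest half-integer
            if (d - c).abs() <= tol {
                result.push((a, b, c));
            }
        }
    }
    result
}
```

With max = 15 and tol = 0.03 this finds both the exact triples like (4, 3, 5) and the near ones like the Lego-approved (12, 12, 17).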

Vertical angles

The above examples are for placing bricks horizontally on a plate. However, some bricks, like Lego Technic bricks or the “light holder” bricks, allow you to build at vertical angles as well, because you can put a brick on the side of them.

The height of a brick and the width of one stud are, however, not equal, so the Pythagorean triples have to be computed differently. We still allow the horizontal axis to be integers and half-integers, as above, but for the vertical axis, we use the unit size of one plate. If you’ve ever built with Lego, you probably know that three plates equal one brick, but it is also true that five plates equal two studs. So if we keep using one horizontal stud as a unit, the vertical coordinate must be a multiple of \frac{2}{5}. This gives the following list of 33 near Pythagorean triples:

\begin{align*}
2.3°&: (10, 0.4, 10),\newline
2.4°&: (9.5, 0.4, 9.5),\newline
2.5°&: (9, 0.4, 9),\newline
2.7°&: (8.5, 0.4, 8.5),\newline
2.9°&: (8, 0.4, 8),\newline
3.1°&: (7.5, 0.4, 7.5),\newline
3.3°&: (7, 0.4, 7),\newline
3.5°&: (6.5, 0.4, 6.5),\newline
3.8°&: (6, 0.4, 6),\newline
4.2°&: (5.5, 0.4, 5.5),\newline
4.6°&: (5, 0.4, 5),\newline
5.1°&: (4.5, 0.4, 4.5),\newline
5.7°&: (4, 0.4, 4),\newline
6.5°&: (3.5, 0.4, 3.5),\newline
7.6°&: (3, 0.4, 3),\newline
17.7°&: (10, 3.2, 10.5),\newline
18.6°&: (9.5, 3.2, 10),\newline
19.3°&: (8, 2.8, 8.5),\newline
20.5°&: (7.5, 2.8, 8),\newline
23.6°&: (5.5, 2.4, 6),\newline
26.1°&: (9, 4.4, 10),\newline
26.6°&: (4, 2, 4.5),\newline
28.1°&: (7.5, 4, 8.5),\newline
30.5°&: (9.5, 5.6, 11),\newline
31°&: (6, 3.6, 7),\newline
34.4°&: (7, 4.8, 8.5),\newline
35.4°&: (4.5, 3.2, 5.5),\newline
36.9°&: (8, 6, 10),\newline
38.7°&: (3.5, 2.8, 4.5),\newline
38.7°&: (9, 7.2, 11.5),\newline
40.4°&: (8, 6.8, 10.5),\newline
41.6°&: (4.5, 4, 6),\newline
42.4°&: (7, 6.4, 9.5),\newline
\end{align*}

Below is a picture of a usage of the triples (6, 3.6, 7) and (7.5, 2.8, 8). Recall that 0.4 on the vertical axis corresponds to one plate and 1.2 is the full height of a brick.

Source code

The source code used to compute the triples is available here.

Dynamically Adaptive Tuning

Background

It has been known since Pythagoras that musical intervals correspond to whole-number ratios of frequencies between the notes. The ratios for the 12 intervals in a chromatic scale are given below.

Interval | Ratio
--- | ---
Unison | 1
Semitone | \frac{16}{15}
Major second | \frac{9}{8}
Minor third | \frac{6}{5}
Major third | \frac{5}{4}
Fourth | \frac{4}{3}
Tritone | \frac{45}{32}
Fifth | \frac{3}{2}
Minor sixth | \frac{8}{5}
Major sixth | \frac{5}{3}
Minor seventh | \frac{9}{5}
Major seventh | \frac{15}{8}
Octave | 2

The problem is that these cannot be used as a basis for tuning an instrument, because the intervals do not add up to what you would expect. As an example, you would expect that two major seconds, (\frac{9}{8})^2 = \frac{81}{64}, would give the same ratio as a major third, \frac{5}{4} = \frac{80}{64}, but this is clearly not the case. Famously, the difference between 12 fifths and 7 octaves, which should be the same, is known as the Pythagorean comma:

\frac{(\frac{3}{2})^{12}}{2^7} \approx 1.01364.

One could fix a base note and then use the intervals from the table above to tune all keys on a piano, but because the notes are not evenly spaced, this does not allow transposition – a melody will sound different depending on what key it is played in. The most common solution to this problem is equal temperament, which divides the octave into 12 equally sized ratios. This permits transposition but has the caveat that all intervals are only approximately equal to their ideal ratios.

Equal temperament has been the most commonly used tuning system for centuries, but there have been suggestions on how to improve the intonation of instruments, especially after the invention of electronic instruments. A recent example is presented in [1], which describes a method to compute just tuning in real time using the method of least squares and also gives a brief historical overview of other approaches.

Below, I present a method for optimal tunings which is very similar to the one in [1], but it is a lot simpler to understand and implement, and I believe the resulting tuning is very similar. The idea behind this method is simply to use just intonation relative to the lead/melody line. To describe it, we first need some formalism about tuning systems.

Tuning systems

A tuning system may be described as a strictly increasing function \tau: \mathbb{Z} \to (0, \infty) such that \tau(0) = 1. Equal temperament can, for example, be described by the relative tuning function

\tau_{\text{12-ET}}(n) = 2^{\frac{n}{12}}.

As an example of how to use this function, we first pick a base, for example that note n = 0 corresponds to A4, so it has frequency 440 Hz. The note a fifth (seven semi-tones) above then has frequency 440 Hz \times \tau_{\text{12-ET}}(7) = 659.255 Hz.

Given a base note, the just intervals in the table above define a tuning \tau_{\text{Just}}, because they define the tuning of the notes 0, \ldots, 12, and all other values may then be derived by moving up and down in octaves, e.g. \tau_{\text{Just}}(n + 12) = 2 \tau_{\text{Just}}(n).

We represent a score with v monophonic parts as a matrix of size v \times l with integer entries. The first row is assumed to be the leading part and l is the length of the score. The score is discretized such that a single column corresponds to the shortest note duration in the score, and longer notes are represented by spanning multiple columns. This formalism can capture neither repeated notes nor rests, but since we are only interested in harmony, it will suffice for our purpose.

As an example, consider the following arrangement of the first four bars of Air on the G String by J. S. Bach / August Wilhelmj.

Using standard MIDI numbers for notes where A4 corresponds to 69, this may be represented by the following matrix:

\left(\begin{matrix}
76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 76, 81, 77, \cdots \\
67, 67, 67, 67, 67, 67, 67, 67, 72, 72, 72, 72, 72, 72, 72, 72, 69, 69, 72, 72, \ldots \\
60, 60, 60, 60, 59, 59, 59, 59, 57, 57, 57, 57, 55, 55, 55, 55, 53, 53, 53, 53, \ldots \\
\end{matrix}\right)

which we will denote A = (a_{ij}). The goal of the tuning is to translate each of these notes a_{ij} into a frequency f_{ij} \in (0, \infty). This can be done using the equal tuning by setting f_{ij} = 440 \times \tau_{\text{12-ET}}(a_{ij} - 69) for all i, j. This will result in the following:

Dynamically adaptive tuning system

To get pure intervals we instead use the following method: assume that the frequencies f_{1,1}, \ldots, f_{1,l} for the first row have been determined. The frequencies for row i are then defined by

f_{ij} = f_{1j} \tau_{\text{Just}}(a_{ij} - a_{1j}).

This ensures that all intervals between the first and the i‘th row are just.

In order to tune the first row we could use equal temperament, but to ensure just step intervals in the lead we instead pick a frequency for the first note, in this case f_{1,1} = 660 because the first note in the lead is E5 (a just fifth above A4 = 440 Hz), and define

f_{1,j} = f_{1,j-1} \tau_{\text{Just}}(a_{1,j} - a_{1,j-1}).

for j > 1. The resulting arrangement with dynamically adaptive just tuning as described above sounds like this.
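The two defining equations can be implemented in a few lines. Here is a Rust sketch (names are my own; it uses the just ratios from the table above and MIDI note numbers as in the example):

```rust
/// Just-intonation ratios for intervals of 0..=12 semitones.
const JUST: [f64; 13] = [
    1.0, 16.0 / 15.0, 9.0 / 8.0, 6.0 / 5.0, 5.0 / 4.0, 4.0 / 3.0,
    45.0 / 32.0, 3.0 / 2.0, 8.0 / 5.0, 5.0 / 3.0, 9.0 / 5.0, 15.0 / 8.0, 2.0,
];

/// tau_Just(n) for any integer interval n, extended by octaves:
/// tau_Just(n + 12) = 2 * tau_Just(n).
fn tau_just(n: i32) -> f64 {
    JUST[n.rem_euclid(12) as usize] * 2f64.powi(n.div_euclid(12))
}

/// Dynamically adaptive tuning: score[0] is the lead, f0 the frequency of
/// its first note. Returns the frequency matrix (f_ij).
fn tune(score: &[Vec<i32>], f0: f64) -> Vec<Vec<f64>> {
    let l = score[0].len();
    // Tune the lead with just step intervals.
    let mut lead = vec![f0; l];
    for j in 1..l {
        lead[j] = lead[j - 1] * tau_just(score[0][j] - score[0][j - 1]);
    }
    // Tune every other part just against the lead, column by column.
    score
        .iter()
        .map(|part| (0..l).map(|j| lead[j] * tau_just(part[j] - score[0][j])).collect())
        .collect()
}
```

For instance, a note 9 semitones below an E5 lead at 660 Hz is tuned to 660 × 3/5 = 396 Hz, a just major sixth below.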

The tuning is dynamic, meaning that the same note played at different places in the score may be tuned to different frequencies. This may even be true for sustained notes, if the melody changes during the sustained note. As an example, the F# half-note played by the third voice in the second half of the third bar changes frequency for each 8th-note duration, and is tuned as 183.33, 182.52, 183.33 and 182.52 Hz resp. over the duration of the half note.

As presented here, the method applies to a fixed score where all notes are known in advance, but it could also be used in real time, assuming the program performing the tuning has a well-defined way of determining the lead, for example the highest played note, and then uses this as the base.

The code used to generate the sound clips is available on GitHub.

Literature

[1] K. Stange, C. Wick and H. Hinrichsen, “Playing Music in Just Intonation: A Dynamically Adaptive Tuning Scheme,” in Computer Music Journal, vol. 42, no. 3, pp. 47-62, Oct. 2018, doi: 10.1162/comj_a_00478.

Compute the bit-length of an integer from bitwise shifts

I recently had to write a simple algorithm that computes the bit-length of an integer (the number of digits in its binary expansion) given only bitwise shifts and comparison operators. It is simple to compute the bit-length in time linear in the number of digits by computing the binary expansion and counting the digits, but it is also possible to do it in time logarithmic in an upper bound for the bit-length. However, I wasn’t able to find such an algorithm described anywhere online, so I share my solution here in case anyone else runs into the same problem.

The idea behind the algorithm is to find the bit-length of an integer n \geq 0 using binary search with the following criterion: Find the unique m such that n \gg m = 0 but n \gg (m - 1) = 1 where \gg denotes a bitwise right shift. Note that m is the bit-length of n. Since the algorithm is a binary search, the running time is logarithmic in the maximal length of n.

Below are both a recursive and an iterative solution written in Java. They should be easy to translate to other languages.

Recursive solution

public static int bitLength(int n, int maxBitLength) {
    if (n <= 1) {
        return n;
    }
    int m = maxBitLength >> 1;
    int nPrime = n >> m;
    if (nPrime > 0) {
        return m + bitLength(nPrime, maxBitLength - m);
    }
    return bitLength(n, m);
}

Iterative solution

public static int bitLength(int n, int maxBitLength) {
    if (n <= 1) {
        return n;
    }
    int length = 1;
    while (maxBitLength > 1) {
        int m = maxBitLength >> 1;
        int nPrime = n >> m;
        if (nPrime > 0) {
            length += m;
            n = nPrime;
            maxBitLength = maxBitLength - m;
        } else {
            maxBitLength = m;
        }
    }
    return length;
}

High resolution fractal flames

Fractal flames are a type of iterated function system invented by Scott Draves in 1992. The fixed sets of fractal flames may be computed using the chaos game (as described in an earlier post), and the resulting histogram may be visualised as beautiful fractal-like images. If the histogram also has a dimension storing which function from the function system was used to reach a particular point, it may even be coloured.

There is a lot of software available for generating fractal flames, and I have built yet another one, focused on generating very high resolution images for printing. The image below has resolution 7087 x 7087 and was generated after about 4 hours of computation on a laptop. It is free to use under a Creative Commons BY-NC 4.0 license.

References

Scott Draves & Eric Reckase (2003), The Fractal Flame Algorithm, https://flam3.com/flame_draves.pdf

Using the “Boids” algorithm to compose music

Simulating flocks of birds

The Boids (short for “bird-like objects”) algorithm was invented by Craig W. Reynolds in 1986 to simulate the movement of flocks of birds. One of the key insights was that a bird in a flock may be simulated using three simple rules:

  1. Collision Avoidance: avoid collisions with nearby flockmates
  2. Velocity Matching: attempt to match velocity with nearby flockmates
  3. Flock Centering: attempt to stay close to nearby flockmates

Using these simple rules, a very realistic-looking simulation of flocks of birds may be implemented. It is usually done in two or three dimensions, but may in principle be done in any space where direction and distance can be defined meaningfully, for example a normed vector space.

This picture is from an animated simulation of a flock of birds generated using this software: https://github.com/jonas-lj/Boids. Besides the three rules from Reynolds’ paper, we have in this simulation added a red bird, a predator, which the other birds attempt to avoid.

Using the simulation to compose music

We have created a special-purpose version of the boids algorithm to compose music. Here, we limit the boids to move around in one dimension but use the same rules as in simulations in two or three dimensions. Each boid corresponds to a voice, and the one-dimensional space is mapped to a set of notes in a diatonic scale. The rules are adjusted such that the collision avoidance prefers a distance of at least a third, but since other rules also influence the movement, and there is only one dimension to move around in, the boids collide often. We also add rules to keep the boids in a certain range. Left on their own, the boids tend to settle in an equilibrium, so to ensure an interesting dynamic, we let one boid (Boid 0 in the plot below) also move randomly, with its velocity changing according to a Brownian motion in addition to the rules the boids follow. With four boids this gives a result as shown below:

A plot of a simulation of 128 iterations of four boids. The velocity of Boid 0 changes both according to the rules and randomly.
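To make the dynamics concrete, here is a minimal sketch of a one-dimensional boids update step in Rust. The coefficients and distances are illustrative, not the ones used for the album, and the random Boid 0 and the range-keeping rules are omitted:

```rust
/// A boid in one dimension (the pitch axis).
struct Boid {
    pos: f64,
    vel: f64,
}

/// One update applying the three rules to every boid.
fn step(boids: &mut [Boid], dt: f64) {
    let n = boids.len() as f64;
    let center: f64 = boids.iter().map(|b| b.pos).sum::<f64>() / n;
    let mean_vel: f64 = boids.iter().map(|b| b.vel).sum::<f64>() / n;
    let positions: Vec<f64> = boids.iter().map(|b| b.pos).collect();
    for (i, b) in boids.iter_mut().enumerate() {
        // Flock centering: steer towards the centre of the flock.
        let mut accel = 0.1 * (center - b.pos);
        // Velocity matching: steer towards the mean velocity.
        accel += 0.1 * (mean_vel - b.vel);
        // Collision avoidance: push away from boids closer than a minimum
        // distance (e.g. a third on the diatonic pitch axis).
        for (j, &p) in positions.iter().enumerate() {
            if i != j && (p - b.pos).abs() < 2.0 {
                accel += 0.5 * (b.pos - p).signum();
            }
        }
        b.vel += accel * dt;
        b.pos += b.vel * dt;
    }
}
```

Mapping each boid’s position to the nearest scale degree after each step then yields one note per voice per iteration.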

The voices were mapped to notes (MIDI files) and used in the recording of all three tracks of the album “Pieces of infinity 02” by Morten Bach and Jonas Lindstrøm. The algorithm was changed slightly between tracks, for example to allow some boids to move faster than others, or to allow more boids, but the same base algorithm was used to generate all voices for the three tracks.

The first seven bars of the four voices plotted above mapped to notes.

References

Reynolds, Craig (1987): Flocks, herds and schools: A distributed behavioural model. SIGGRAPH ’87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, https://team.inria.fr/imagine/files/2014/10/flocks-hers-and-schools.pdf

Lindstrøm, Jonas (2022): boid-composer, https://github.com/jonas-lj/boid-composer

On the creation of “The Nørgård Palindrome”

The Nørgård Palindrome is an ambient electronic music track recently released by Morten Bach and me. It is composed algorithmically and recorded live in studio using a lot of synthesizers. It is the second track of the album, the first being “Lorenz-6674089274190705457 (Seltsamer Attraktor)”, which was described in another post.

The arpeggio-like tracks in The Nørgård Palindrome are created from an integer sequence first studied by the Danish composer Per Nørgård in 1959, who called it an “infinite series”. It may be defined as

\begin{aligned}
a_0 &= 0, \\
a_{2n} &= -a_n, \\
a_{2n + 1} &= a_n + 1.
\end{aligned}

The first terms of the sequence are

0, 1, -1, 2, 1, 0, -2, 3, -1, 2, 0, 1, 2, -1, -3, 4, 1, 0, \ldots

The sequence is interesting from a purely mathematical viewpoint and has been studied by several authors, for example Au, Drexler-Lemire & Shallit (2017). Considering only the parity of the sequence yields the famous and well-studied Thue-Morse sequence.

However, we will, like Per Nørgård, use the sequence to compose music. The sequence is most famously used in the symphony “Voyage into the Golden Screen”, where Per Nørgård mapped the first terms of the sequence to notes by picking a base note corresponding to 0 and then mapping an integer k to the note k semitones above the base note.

In The Nørgård Palindrome, we do the same, although we use a diatonic scale instead of a chromatic scale, and get the following notes when using a C-minor scale with 0 mapping to C:

It turns out that certain patterns are repeated throughout the sequence, although sometimes transposed, which makes the sequence very usable in music.

In the video below we play the first 144 notes slowly while showing the progression of the corresponding sequence.

The first 144 notes of Nørgård’s infinite series mapped to notes in a diatonic scale.

In The Nørgård Palindrome, we compute a large number of terms, allowing us to play the sequence very fast for a long time, and when done, we play the sequence backwards. This voice is played as a canon in two, and the places where the voices are in harmony or aligned emphasise the structure of the sequence.

The recurring theme is also composed from the sequence using a pentatonic scale and played slower.

The code used to generate the sequence and the MIDI files used on the track is available on GitHub. The track is released as part of the album Pieces of infinity 01, which is available on most streaming services, including Spotify and iTunes.