# Group Computations in Racket

## by Markus Pfeiffer

Chris Jefferson has previously brought up the idea of making really simple implementations of some group algorithms: such implementations make it easier to understand the core of an algorithm, and hence to explain it to students, colleagues, or oneself. In GAP one can in principle look up the implementation of a lot of algorithms, but the implementations are fragmented and optimised beyond recognition (and sometimes to fairly old standards of computing technology).

Chris Jefferson finally went ahead and implemented some of these ideas.

For my posts I have set myself the following goals:

- use a functional programming language (they rock), and code in functional style
- use types
- keep things simple

I decided to use Racket (deal with it: I like Racket, I want to learn it better, and yes, I will be typing in Racket too). Some technical discussion about programming languages might follow in a separate post.

## Representing groups

To do anything useful with a group on a computer, we need to represent it somehow. In principle, anything that provides a representation of bijective maps, their inverses, and their composition will do.

For the purposes of what I would like to do (I am lying: I would quite like to also extend my code to deal with semigroups, but I need to focus), I envisage working with permutation groups and finitely presented groups.

Representing a permutation \pi on the set \Omega = \{1\ldots n\} for some n \in \mathbb{N} is fairly simple: just store an array of images of length (at most) n whose k-th entry is k\pi, the image of k under \pi.
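As a minimal sketch of this representation (plain, untyped Racket for brevity; names like `perm-apply` are my own, not from any library), a permutation stored as a vector of images admits very short definitions of application, composition, and inversion:

```racket
#lang racket

;; A permutation on {1..n} is a vector of images:
;; slot k-1 holds the image of the point k.

(define (perm-apply p k)           ; image of point k under p
  (vector-ref p (sub1 k)))

(define (perm-compose p q)         ; apply p first, then q
  (build-vector (vector-length p)
                (lambda (i) (perm-apply q (vector-ref p i)))))

(define (perm-inverse p)
  (define inv (make-vector (vector-length p) 0))
  (for ([i (in-naturals 1)]        ; point i maps to img under p,
        [img (in-vector p)])       ; so img maps to i under p^-1
    (vector-set! inv (sub1 img) i))
  inv)

;; Example: the 3-cycle (1 2 3)
(define c (vector 2 3 1))
(perm-apply c 1)                   ; => 2
(perm-compose c (perm-inverse c))  ; => #(1 2 3), the identity
```

Composing `p` then `q` matches the right-action convention k(pq) = (kp)q used above.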

To represent a permutation group in S_n we use a list of generators. There are already a few design decisions to take: should we be acting on \Omega_1 = \{1\ldots n\} or \Omega_0 = \{0\ldots n-1\}? While the former is more natural for a mathematician, the latter is more natural for a computer (though nowadays it certainly is not a problem to “waste” one point). Actually, while we’re at it, shouldn’t at least our surface language support acting on arbitrary finite sets?

Then: how many points are we expecting to act on, are our permutations dense or sparse, and is there a concern about the cost of multiplying permutations?

This is not really keeping it simple, now, is it? So we’ll be acting on \Omega_1 for the time being, and I will come back to the discussion of technical detail later.
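To make the “group as a list of generators” idea concrete, here is a sketch (again untyped Racket, hypothetical names, permutations as image vectors on \Omega_1) of one of the simplest things one can compute from generators alone: the orbit of a point, by repeatedly applying the generators until no new points appear.

```racket
#lang racket

;; gens: a list of permutations given as image vectors on {1..n}.
;; Compute the orbit of point pt by breadth-first closure.
(define (orbit gens pt)
  (let loop ([frontier (list pt)] [seen (set pt)])
    (if (null? frontier)
        (sort (set->list seen) <)
        (let* ([p    (car frontier)]
               [imgs (for/list ([g (in-list gens)])
                       (vector-ref g (sub1 p)))]   ; image of p under g
               [new  (filter (lambda (q) (not (set-member? seen q)))
                             imgs)])
          (loop (append (cdr frontier) new)
                (for/fold ([s seen]) ([q (in-list new)])
                  (set-add s q)))))))

;; Example: the group <(1 2), (2 3)> acting on {1,2,3,4}
(orbit (list (vector 2 1 3 4) (vector 1 3 2 4)) 1)  ; => '(1 2 3)
```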

Finite presentations at first look very computer-friendly, too. They can even represent infinite groups: we represent group elements as strings over a finite alphabet, multiplication is concatenation, and formal inversion is reading the string in reverse and replacing every generator with its formal inverse.
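This string calculus translates almost literally into code. A sketch (untyped Racket; the representation of formal inverses and the names are my own choices), including the free reduction that cancels adjacent x x^{-1} pairs:

```racket
#lang racket

;; A word is a list of letters; a generator is a symbol like 'a,
;; and its formal inverse is represented as (cons 'inv 'a).
(define (inv-gen g)
  (if (and (pair? g) (eq? (car g) 'inv))
      (cdr g)              ; inverse of an inverse is the generator
      (cons 'inv g)))

(define (word-mul u v) (append u v))             ; concatenation
(define (word-inv w) (map inv-gen (reverse w)))  ; reverse and invert letters

;; Free reduction: cancel adjacent pairs g g^-1.
(define (free-reduce w)
  (foldr (lambda (g acc)
           (if (and (pair? acc) (equal? (car acc) (inv-gen g)))
               (cdr acc)     ; g cancels with the letter after it
               (cons g acc)))
         '() w))

(free-reduce (word-mul '(a b) (word-inv '(a b))))  ; => '()
```

Note that `free-reduce` only computes in the free group; deciding equality modulo the relators of a presentation is a much harder matter, as the next paragraph hints.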

The excitement wears off a bit when it comes to figuring out what a group that is represented this way actually is: even the word problem (deciding whether two such strings represent the same element of the group) is undecidable in general.

Still, for example when dealing with group homomorphisms, or for certain ways of verifying a Schreier-Sims stabiliser chain, it is crucial to be able to deal with finite group presentations.

I have not thought about matrix groups very much for the purposes of my endeavours. In a sense they are like permutations: matrices are easily represented as k\times k arrays of finite field elements, and they act on a well-defined object (the k-dimensional vector space over the chosen field). There is a big “but” lurking, though: while matrix groups have additional structure (the action is linear, after all), they are also usually huge (even when finite), and they have humongous orbits. There are randomised algorithms to deal with matrix groups that I would like to explore, but doing so is not on my list of priorities.