Last updated on February 27, 2024
In this post, we discuss various ways to schedule (and not to schedule) a 4-team pool at an ultimate tournament. To keep things simple, we assume that two fields are available so that all 4 teams can play at the same time, and that all pool games are played on a single day. We also assume a seeding of the teams from 1 to 4 (1 being the best team, and 4 the presumed weakest).
A bit of theory
First, a bit of theory. We want to respect the following three basic principles:
- Principle 1: finish with the most important games
- Principle 2: start with the least important games
- Principle 3: finish with the single most important game
Why these principles? Simply because teams are not at their best potential at the start of the day. It is preferable that the teams involved in an important game meet as late as possible in the day or in the pool.
The most important games are those between teams seeded close to each other (1 v 2, 2 v 3, 3 v 4). The least important games are those between teams seeded far apart (1 v 3, 2 v 4, 1 v 4).
Also, we will say that the single most important game of the pool is the one involving the last team to qualify for the next stage and the best team to be eliminated from the championship. For example, if only the top two teams advance to the knockout stage (as in the FIFA World Cup group stage), then the most important game of the pool is 2 v 3. If only the best team of the pool advances, then the most important game is 1 v 2. If only the last team is eliminated from the championship after the pool (as at the Canadian Ultimate Championships), then 3 v 4 is the most important game of the pool.
Be careful before adapting the principles discussed here to larger pools of 6 teams or more whose games are spread over more than one day. The question then becomes whether the priority is to play the most important games at the end of the pool or at the end of the first day. That is another topic: let us stick to the case of a 4-team pool.
Option 1
The first option that comes to mind is to schedule the teams as follows. We take the point of view of the best team and have it play against stronger and stronger opponents as the day goes on:
Round | Field 1 | Field 2 |
---|---|---|
Round 1 | 1 v 4 | 2 v 3 |
Round 2 | 1 v 3 | 2 v 4 |
Round 3 | 1 v 2 | 3 v 4 |
Without even knowing which game is the most important, we can already say that this is a very poor choice of format, for two reasons:
- (Principle 1 violated) The game 2 v 3 is an important game and should not open the pool.
- (Principle 2 violated) The games 1 v 3 and 2 v 4 are less important and should be played at the start of the pool.
This is the format the FFFD chose for the 4-team pools of the French Beach Mixed Championships N1, N2 and N3 on September 23-24, 2023. The FFFD format consists of 4 pools of 4 teams, and only the top 2 of each pool qualify for the next stage (quarterfinals). This means that the most important game is 2 v 3, which should therefore be played at the end of the pool (Principle 3 violated). Yet it is the game played first on Saturday morning. In short, this is not ideal: with high probability, the format decides who will play in the top 8 at the very first game on Saturday morning, when the teams are not at their full potential.
Option 2
The classic option for a 4-team pool is the following format:
Round | Field 1 | Field 2 |
---|---|---|
Round 1 | 1 v 3 | 2 v 4 |
Round 2 | 1 v 4 | 2 v 3 |
Round 3 | 1 v 2 | 3 v 4 |
With this schedule:
- Principle 2 is respected: we start with the least important games.
- Principle 1 is respected: we finish with the most important games (2 v 3, then 1 v 2 and 3 v 4).
- Principle 3 is respected if the most important game is 1 v 2 or 3 v 4.
This is the basic format recommended by USA Ultimate for 4 teams in its manual of ultimate tournament formats, which has existed since 1993.
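These ordering principles can be sanity-checked mechanically. In the sketch below (my own illustrative scoring, not part of any federation manual), a round's importance is the sum of 1/|seed gap| over its matches, so a schedule respects Principles 1 and 2 when round importance never decreases:

```python
# Score a round of a 4-team pool: a match between close seeds is more
# important, so we weight each match by 1 / |seed gap|.  This weighting
# is an illustrative choice, not an official metric.
def importance(rnd):
    return sum(1 / abs(a - b) for a, b in rnd)

def respects_ordering(schedule):
    # Principles 1 and 2: round importance should never decrease.
    scores = [importance(r) for r in schedule]
    return scores == sorted(scores)

option1 = [[(1, 4), (2, 3)], [(1, 3), (2, 4)], [(1, 2), (3, 4)]]
option2 = [[(1, 3), (2, 4)], [(1, 4), (2, 3)], [(1, 2), (3, 4)]]

print(respects_ordering(option1))  # False: opens with the important 2 v 3
print(respects_ordering(option2))  # True
```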
Observe that Option 2 is compatible with a 4th round of crossover games between the pool phase and the knockout phase, in which the teams finishing 2nd and 3rd in distinct pools are crossed. For example, with four pools A-B-C-D, the crossover games (also called pre-quarters in this case) are A2 v D3, A3 v D2, B2 v C3, B3 v C2, whose winners meet the pool winners in the quarterfinals.
Crossover games handle the situation where three strong teams end up in the same pool. They let a team finishing 3rd in its pool try to beat a team that finished 2nd in another pool in order to qualify for the next stage.
With 2nd v 3rd crossover games, only the 4th team is eliminated at the end of pool play. The game 3 v 4 is therefore the most important game of the pool, and this situation is compatible with Option 2 (Principle 3 respected).
Option 3
When only 2 of the 4 teams qualify for the next stage, the game 2 v 3 becomes the most important game of the pool. In that case, Option 2 above does not respect Principle 3, since 2 v 3 is played in the second round.
It is then preferable to proceed as follows:
Round | Field 1 | Field 2 |
---|---|---|
Round 1 | 1 v 3 | 2 v 4 |
Round 2 | 1 v 2 | 3 v 4 |
Round 3 | 1 v 4 | 2 v 3 |
With this schedule:
- Principle 2 is respected: we start with the least important games.
- Principle 1 is almost respected: we finish with the most important games (1 v 2 and 3 v 4, then 2 v 3).
- Principle 3 is respected: we finish with the most important game (2 v 3).
This is the format FIFA uses at the World Cup for 4-team pools, where the game 2 v 3 is the most important of the pool. Here is a screenshot of the Wikipedia page on the 2026 FIFA World Cup:
Option 4
There is a 4th option, proposed in the USA Ultimate manual, that deserves a mention:
Round | Field 1 | Field 2 |
---|---|---|
Round 1 | 1 v 3 | 2 v 4 |
Round 2 | winner v loser | winner v loser |
Round 3 | remaining game | remaining game of the round robin |
In round 2, each round-1 winner plays the loser of the other game, which is the only pairing that lets round 3 complete the round robin. When the seeding is uncertain, this format increases the chances that the undefeated teams meet in the third round. If every game follows the seeding, Option 4 is equivalent to Option 2.
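Under the assumption that the better seed always wins (a sketch, since real games need not follow the seeding), a few lines of Python confirm that Option 4's bracket rounds reproduce Option 2:

```python
# Play out Option 4 assuming the better-seeded (smaller) number always wins.
def play(a, b):
    return (min(a, b), max(a, b))  # (winner, loser)

# Round 1, same as Option 2: 1 v 3 and 2 v 4.
(w1, l1), (w2, l2) = play(1, 3), play(2, 4)

# Round 2: each winner plays the loser of the other game.
round2 = [(w1, l2), (w2, l1)]
# Round 3: the remaining games of the round robin.
round3 = [(w1, w2), (l1, l2)]

print(round2)  # [(1, 4), (2, 3)] -- Option 2's round 2
print(round3)  # [(1, 2), (3, 4)] -- Option 2's round 3
```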
Conclusion
If you organize a tournament, please do not reinvent the wheel. Tournament formats have been studied extensively in the past (USA Ultimate, FIFA, etc.). For instance, consult the USA Ultimate manual of tournament formats, or look at what other federations and international bodies do.
Regarding the schedule of 4-team pools, here are my recommendations:
- if one or three teams of the pool qualify for the next stage, I recommend Option 2: the classic format recommended in the USA Ultimate manual of tournament formats.
- if two teams of the pool qualify for the next stage, I recommend Option 3: the FIFA World Cup pool format.
- in all cases, I recommend avoiding Option 1.
On Wednesday, June 28th, 2023, I will give a short Introduction to Python/SageMath as an online course organized by Pierre-Guy Plamondon at Mathematical Summer in Paris (MSP23) on WorkAdventure. Below is the material that will be presented or suggested.
Exercises:
- Install and open a Jupyter notebook and do the User Interface Tour in the help menu.
- Programming with Python. Here is a list of Jupyter notebooks to learn programming in Python: ProgrammingExercises.zip or ProgrammingExercises.tar.xz
- Reproduce the computations made by BuzzFeedNews in a github repository of your choice, for instance about the fentanyl and cocaine overdose deaths (2018) or about The Tennis Racket (2016).
- Solve some problems from Project Euler, which contains more than 500 exercises that have to be solved with a computer.
- Reproduce one or more images from the matplotlib library.
- Download the book Mathematical Computation with Sage by Paul Zimmermann et al. about the SageMath open source software. Reproduce the computations made in a section of your choice in the book.
- Visit https://ask.sagemath.org/questions/ and try to reproduce some of the best answers to questions of interest for you.
- Choose a section of your choice in the SageMath very large Reference Manual and reproduce the computations made in it.
When working on the above, two principles apply:
- Once you have finished solving a notebook or a Project Euler problem on your own, you need to explain your solution to at least one other person (who has already solved the same notebook or problem).
- Once you have reproduced a BuzzFeedNews computation, a matplotlib image or some other computation, you need to present and explain it to at least one other person.
Supplementary material:
- Experimenting with dynamical systems in SageMath: DynamicalSystemExercices.zip
- Some more notebooks and exercises from this course given by Vincent Delecroix at AIMS in Rwanda (2016).
The hat is an aperiodic tile discovered by David Smith, Joseph Samuel Myers, Craig S. Kaplan, and Chaim Goodman-Strauss on March 20, 2023. Following a talk given on March 26 at the National Museum of Mathematics, the news spread quickly. Indeed, the discovery was mentioned over the following days in blogs, then in The New York Times on March 28, Le Monde on March 29, and The Guardian and Quanta Magazine on April 4. A 20-minute video by Passe-Science, published on May 3, explains the result and its context.
Articles with deeper results on the tile, written by experts of the field, already appeared on arXiv in May 2023. They interpret the tilings as cut-and-project sets coming from higher-dimensional lattices. The second one even provides a partition of the window in the internal space, somewhat like for the Jeandel-Rao tilings, except that here the partition has fractal boundaries, which came as a big surprise to me.
As I was giving outreach talks at my son's school in Bègles on May 3 and at the Lycée Kastler in Talence on May 4, I set up a laser-cutting project around the aperiodic tile in order to share this recent discovery.
The first task was to construct a tiling of a large enough rectangle with the aperiodic piece. To do so, I added a new module to my optional package for the SageMath software.
The module reduces the question to an instance of the exact cover problem, which can be solved in SageMath using Donald Knuth's dancing links algorithm, SAT solvers, or linear programming (MILP solvers). The code uses the coordinate system defined in the file validate/kitegrid.pdf found in the source code associated with the article.
Here is an example of the construction of a tiling with the aperiodic tile. The computation is done in SageMath with the development version of my optional package slabbe, which can be installed with the command sage -pip install slabbe. Here, I use the SAT solver Glucose, developed at LaBRI. It can be installed in SageMath with the command sage -i glucose.
    sage: from slabbe.aperiodic_monotile import MonotileSolver
    sage: s = MonotileSolver(16, 17)
    sage: G = s.draw_one_solution(solver='glucose')
    sage: G.save('solution_16x17.png')
    sage: G
In the approach used above, the problem is represented as an exact cover problem: cover the integers from 1 to n exactly once with subsets chosen from a given list of subsets. Here, the region to cover is represented discretely by counting 6 points of the plane per hexagon (one point for each kite contained in a hexagon). Recall that the hat piece we are interested in is a union of exactly 8 of these kites.
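To illustrate what an exact cover is on a tiny instance (Knuth's classic toy example, not the tile problem itself), one can brute-force over all selections of subsets; the real solver replaces this exponential search with dancing links, SAT or MILP:

```python
from itertools import combinations

# Toy exact cover instance: cover {1,...,7} exactly once with some of
# the subsets below.  Rows of the 0/1 matrix correspond to subsets,
# columns to elements (for the tile: piece placements and kites).
universe = frozenset(range(1, 8))
subsets = [frozenset(s) for s in
           [{1, 4, 7}, {1, 4}, {4, 5, 7}, {3, 5, 6}, {2, 3, 6, 7}, {2, 7}]]

def exact_covers(universe, subsets):
    # Brute force: try every selection of subsets (fine for toy sizes).
    for r in range(1, len(subsets) + 1):
        for combo in combinations(subsets, r):
            # Exact cover: pairwise disjoint and union equal to the universe.
            if (sum(len(s) for s in combo) == len(universe)
                    and frozenset().union(*combo) == universe):
                yield combo

solution = next(exact_covers(universe, subsets))
print(sorted(sorted(s) for s in solution))  # [[1, 4], [2, 7], [3, 5, 6]]
```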
    sage: s.plot_domain()
Then we build a 0/1 matrix with as many columns as there are points above (16 * 17 * 2 * 6 = 3264) and as many rows as there are isometric copies of the piece intersecting the domain. For each copy of the piece, the corresponding row of the matrix contains 1's exactly in the columns associated with the kites occupied by that copy.
    sage: s.the_dlx_solver()
    Dancing links solver for 3264 columns and 7116 rows
The computation above, which built the (sparse) matrix, indicates that 7116 isometric copies of the piece intersect the domain (completely or partially). When drawing a solution, incomplete pieces are ignored.
We can now solve the problem.
    sage: s = MonotileSolver(8, 8)
    sage: %time L = s.one_solution()  # Knuth's dancing links algorithm is used by default
    CPU times: user 798 ms, sys: 32.2 ms, total: 830 ms
    Wall time: 1min 20s
A solution is a list of numbers indicating which rows of the 0/1 matrix to select to form a solution; that is, the submatrix restricted to those rows contains exactly one 1 in each column:
    sage: L
    [81, 85, 125, 128, ... 1772, 1783, 1794, 1815]
Here, it turns out that SAT solvers are more efficient than the dancing links algorithm at finding one solution:
    sage: %time L = s.one_solution(solver='glucose')
    CPU times: user 326 ms, sys: 16.1 ms, total: 342 ms
    Wall time: 526 ms
    sage: %time L = s.one_solution(solver='kissat')
    CPU times: user 335 ms, sys: 3.64 ms, total: 339 ms
    Wall time: 461 ms
Indeed, Glucose behaves rather well on plane tiling problems when a solution exists. But when there is no solution, Knuth's dancing links algorithm is sometimes better. Also, the dancing links algorithm is very efficient at enumerating all solutions.
I added the Kissat solver to SageMath as an optional package this year, following a discussion with Laurent Simon over coffee at LaBRI. It can be installed in SageMath with the command sage -i kissat.
Here we extract the outline of the pieces of a solution (each edge is drawn only once, so that the laser cutter does not pass twice over the same edge, which can damage or burn the edges of the wooden pieces) and create a pdf or svg file. I chose a size of 16 double hexagons horizontally and 17 vertically, because this produces a file matching a size of 1 m x 60 cm, which is the size of the laser cutter at our disposal:
    sage: s = MonotileSolver(16, 17)
    sage: tikz = s.one_solution_tikz(solver='glucose')
    sage: tikz.pdf('solution_16x17.pdf')
    sage: tikz.svg('solution_16x17.svg')  # or
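The "each edge cut only once" idea is independent of SageMath and can be sketched in a few lines of plain Python (a hypothetical helper, not the actual slabbe code):

```python
# Keep each undirected edge only once, so the laser never cuts the
# same segment twice (which can burn the wood).
def unique_edges(edges):
    seen = set()
    out = []
    for p, q in edges:
        key = frozenset((p, q))  # undirected: (p, q) == (q, p)
        if key not in seen:
            seen.add(key)
            out.append((p, q))
    return out

# Two adjacent unit squares share the edge (1,0)-(1,1).
square1 = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
square2 = [((1, 0), (2, 0)), ((2, 0), (2, 1)), ((2, 1), (1, 1)), ((1, 1), (1, 0))]

print(len(unique_edges(square1 + square2)))  # 7: the shared edge is kept once
```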
With the help of David Renault, my colleague at LaBRI who teaches at ENSEIRB and has already accompanied me on laser-cutting projects, we cut the file above on Thursday, April 27, at EirLab, the digital fabrication workshop (FabLab) of ENSEIRB-MATMECA:
As always, the svg file has to be tweaked a bit in Inkscape before launching the laser cut. Here is the modified file just before cutting.
Now we can play with the pieces:
With my sons, we found an interesting shape which tiles the plane periodically except for a hexagonal hole. It turns out that the same shape can be created in two different ways: in the image below, the shape on the right is globally the same as the one at the top left, but it is not obtained in the same way. Yet both have the same outer contour and the same hexagonal hole.
This observation, already made by others, led to a covering of a sphere with the piece and a pentagonal hole:
In this post, I suggest a tournament format for 7 teams that qualifies one team for the next stage (national competitions, for example). In short, the suggested solution is:
- Play a round robin over three days (2 games per day per team).
- On the fourth day, the teams ranked #5, #6 and #7 are eliminated and play a round robin among themselves (two games each).
- On the fourth day, the top four play the semifinals #1 vs #4 and #2 vs #3, followed by the final and the third-place game.
More precisely, here are the details day by day.
Day 1
On day 1, the top 4 teams mostly play among themselves, and teams 5-6-7 among themselves:
Round | Field 1 | Field 2 |
---|---|---|
Round 1 | 5 v 6 | 2 v 7 |
Round 2 | 3 v 4 | |
Round 3 | 5 v 7 | 1 v 2 |
Round 4 | 1 v 4 | 3 v 6 |
Note: the game 1 v 2 can be moved up 30 minutes to leave more time between team 1's two games.
Day 2
On day 2, there are important games to separate the top 4 from teams 5-6-7:
Round | Field 1 | Field 2 |
---|---|---|
Round 5 | 2 v 6 | 3 v 7 |
Round 6 | 1 v 5 | |
Round 7 | 2 v 3 | 4 v 6 |
Round 8 | 1 v 7 | 4 v 5 |
Note: the game 4 v 6 can be moved up 30 minutes to leave more time between team 4's two games.
Day 3
On day 3, there are mostly less important games between teams far apart in the seeding. With three fields:
Round | Field 1 | Field 2 | Field 3 |
---|---|---|---|
Round 9 | 6 v 7 | ||
Round 10 | 1 v 3 | 2 v 5 | 4 v 7 |
Round 11 | 1 v 6 | 2 v 4 | 3 v 5 |
With only two fields, the same games can be spread over four rounds:
Round | Field 1 | Field 2 |
---|---|---|
Round 9 | 1 v 3 | 6 v 7 |
Round 10 | 2 v 5 | |
Round 11 | 1 v 6 | 4 v 7 |
Round 12 | 2 v 4 | 3 v 5 |
Note: the game 4 v 7 can be moved up 30 minutes to leave more time between team 4's two games.
Day 4
I wondered whether, at the end of the third day, we should play pre-semifinals #6 vs #3 and #5 vs #4, but this is not what is recommended in the USA Ultimate manual for a 7-team format with a single team qualifying for the promotion. Indeed, it would give less weight to playing well in the round robin, and in particular to the first two days. Since a single team qualifies, I think it is normal to eliminate 3 of them after the round robin.
I was asked by email how to compute with SageMath the factor complexity of words generated by multidimensional continued fraction algorithms. I am copying my answer here so that I can share it more easily.
A) How to calculate the factor complexity of a word
To compute the factor complexity, we need a finite word, not an infinite word. In the example below, I take a prefix of the Fibonacci word and compute the number of factors of size 100 and of sizes 0 to 19:
    sage: w = words.FibonacciWord()
    sage: w
    word: 0100101001001010010100100101001001010010...
    sage: prefix = w[:100000]
    sage: prefix.number_of_factors(100)
    101
    sage: [prefix.number_of_factors(i) for i in range(20)]
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
The documentation for the number_of_factors method contains more examples, etc.
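The same computation can also be done without Sage. A minimal pure-Python sketch (my own helper names), which rebuilds a Fibonacci-word prefix by iterating the substitution 0 -> 01, 1 -> 0:

```python
# Number of distinct factors (contiguous substrings) of length n.
def number_of_factors(word, n):
    return len({word[i:i + n] for i in range(len(word) - n + 1)})

# Prefix of the Fibonacci word, obtained by iterating 0 -> 01, 1 -> 0.
def fibonacci_prefix(length):
    w = "0"
    while len(w) < length:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:length]

prefix = fibonacci_prefix(100000)
print(number_of_factors(prefix, 100))                    # 101
print([number_of_factors(prefix, i) for i in range(8)])  # [1, 2, 3, 4, 5, 6, 7, 8]
```

A prefix of length 100000 is long enough for these counts to agree with the n + 1 complexity of the infinite Fibonacci word for the small n shown.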
B) How to construct an S-adic word in SageMath
The words.s_adic method in SageMath constructs an S-adic sequence from a directive sequence, a set of substitutions, and a sequence of first letters.
For example, we may use the Kolakoski word as a directive sequence:
    sage: directive_sequence = words.KolakoskiWord()
    sage: directive_sequence
    word: 1221121221221121122121121221121121221221...
Then, I define the Thue-Morse and Fibonacci substitutions:
    sage: tm = WordMorphism('a->ab,b->ba')
    sage: fib = WordMorphism('a->ab,b->a')
    sage: tm
    WordMorphism: a->ab, b->ba
    sage: fib
    WordMorphism: a->ab, b->a
Then, to define an S-adic sequence, I also need to define the sequence of first letters. Here, it is always the constant sequence a,a,a,a,...:
    sage: from itertools import repeat
    sage: letters = repeat('a')
I associate the letter 1 of the Kolakoski sequence with the Thue-Morse morphism and the letter 2 with the Fibonacci morphism; this allows constructing an S-adic sequence:
    sage: w = words.s_adic(directive_sequence, letters, {1:tm, 2:fib})
    sage: w
    word: abbaababbaabbaabbaababbaabbaababbaababba...
Then, as above, I can take a prefix and compute its factor complexity:
    sage: prefix = w[:100000]
    sage: [prefix.number_of_factors(i) for i in range(20)]
    [1, 2, 4, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 34, 38]
C) Creating an S-adic sequence from Brun algorithm
With the package slabbe, you can construct an S-adic sequence from some of the known multidimensional continued fraction algorithms.
One can install it by running sage -pip install slabbe in a terminal where the sage command exists. Sometimes this does not work.
Then, one may do:
    sage: from slabbe.mult_cont_frac import Brun
    sage: algo = Brun()
    sage: algo
    Brun 3-dimensional continued fraction algorithm
    sage: D = algo.substitutions()
    sage: D
    {312: WordMorphism: 1->12, 2->2, 3->3,
     321: WordMorphism: 1->1, 2->21, 3->3,
     213: WordMorphism: 1->13, 2->2, 3->3,
     231: WordMorphism: 1->1, 2->2, 3->31,
     123: WordMorphism: 1->1, 2->23, 3->3,
     132: WordMorphism: 1->1, 2->2, 3->32}
    sage: directive_sequence = algo.coding_iterator((1,e,pi))
    sage: [next(directive_sequence) for _ in range(10)]
    [123, 312, 312, 321, 132, 123, 312, 231, 231, 213]
Construction of the s-adic word from the substitutions and the directive sequence:
    sage: from itertools import repeat
    sage: D = algo.substitutions()
    sage: directive_sequence = algo.coding_iterator((1,e,pi))
    sage: words.s_adic(directive_sequence, repeat(1), D)
    word: 1232323123233231232332312323123232312323...
Shortcut:
    sage: algo.s_adic_word((1,e,pi))
    word: 1232323123233231232332312323123232312323...
There are some more examples in the documentation.
This code was used in the creation of the 3-dimensional Continued Fraction Algorithms Cheat Sheets 7 years ago during my postdoc at Université de Liège, Belgium.
UPDATE (Oct 24, 2022), thanks to comments from reddit: fixed the way standard deviation is presented to avoid misinterpretation; removed the sin(-x) graphics.
When I started to play ultimate in September 2002 in Sherbrooke, the local team Stakatak was just coming back from their very first (or maybe second?) participation at the Canadian Ultimate Championship, in the mixed division. I remember they lost all nine of their games and finished 16th out of 16 teams. But they came back to Sherbrooke with the "Spirit of the Game" award, which we were very proud of.
What I want to discuss here is not whether a team that loses all its games and wins the Spirit of the Game award deserves it or not. The question I want to consider in this blog post is about the evaluation of the spirit of the game in a typical ultimate frisbee game: are we biased by the end result of the game (win vs loss) when we evaluate the opponent's spirit of the game? In particular:
- do we give more spirit points to the opposing team when they have lost against us?
- do we give fewer spirit points to the opposing team when they have won against us?
The Spirit of the Game
As not everyone reading this post has ever played an ultimate frisbee game, let me recall what the spirit of the game is and how it is evaluated nowadays in a tournament. As ultimate (frisbee) is a self-officiated team sport, the spirit of the game is important. Every team thinks they have good spirit, but not every opponent agrees. There are ways to help (or force) teams to improve their spirit of the game, the most important one being the end-of-game discussion, during which the two teams discuss the game and, if necessary, any issues that happened during it. Another is the evaluation of the spirit of the game by the opposing team, which is made by scoring 5 categories:
- Rules Knowledge and Use (4 points)
- Fouls and Body Contact (4 points)
- Fair-Mindedness (4 points)
- Positive Attitude and Self-Control (4 points)
- Communication (4 points)
In ultimate tournaments, the spirit scores of each team are public, which allows evaluating and ranking each team. When a team is ranked low, it shows without ambiguity that the community thinks this team needs to improve, because it was evaluated badly by more than one team. This peer pressure contributes to making teams improve themselves. The ranking is also used to elect a most spirited team, which is often given a "Spirit of the Game" trophy at the end of the tournament.
Three tournaments considered for the data analysis
For the data analysis, we consider the following three tournaments that were held during Summer 2022:
- World Masters Ultimate Club Championships (WMUCC) 2022, Limerick, Ireland, June 25th to July 2nd 2022
- World Ultimate Club Championships (WUCC) 2022, Cincinnati Ohio, USA, July 23-30, 2022
- Canadian Ultimate Championships (CUC) 2022, Brampton, Ontario, August 18-21, 2022
These tournaments all use the Ultiorganizer website, which allows parsing the results with the same Python script, which I have made public.
In total, 1540 games were played in these 3 tournaments. Unfortunately, the score or the spirit score was not recorded or is not available for all games, perhaps because teams forgot to provide the spirit scores or because the game was not played at all. We were able to access all the needed data, including final score and spirit scores, for 1448 of the games (94 %). Since the spirit of both teams gets evaluated during a game, this means 1448 x 2 = 2896 evaluations of a team's spirit.
Tournament | Number of games | Number of games with complete data |
---|---|---|
WMUCC 2022 | 589 | 548 (93 %) |
WUCC 2022 | 652 | 628 (96 %) |
CUC 2022 | 299 | 272 (91 %) |
Total | 1540 | 1448 (94 %) |
Average Spirit Points
The average spirit points received by a team is shown in the table below for each of the three considered tournaments.
Tournament | mean | std |
---|---|---|
WMUCC 2022 | 11.307 | 1.830 |
WUCC 2022 | 10.561 | 1.685 |
CUC2022 | 10.821 | 2.009 |
Spirit points are on average slightly above 10. Also, at WMUCC, the spirit points were higher in general, slightly above 11. We may interpret this as the fact that older master players, who have been playing for a long time, were happy to play again at the international level after the pandemic and to meet old friends, which contributed to nicely spirited games in Limerick (why did I not try to go to Limerick again? I miss my old friends from Epoq, Nsom and Quarantine!).
Average Spirit Points for losers/winners
Now let's compare the average spirit points given to the loser of a game vs the winner. In the three considered tournaments, it turns out that on average the loser of the game always gets more spirit points. See the results in the following table.
Tournament | When winning (mean; std) | When losing (mean; std) |
---|---|---|
WMUCC 2022 | 10.904; 1.756 | 11.709; 1.815 |
WUCC 2022 | 10.323; 1.707 | 10.800; 1.630 |
CUC2022 | 10.566; 2.101 | 11.076; 1.882 |
We visualize below the distribution of spirit scores at WMUCC 2022 for losers vs winners with the following box plots, made with matplotlib through the pandas library. Small circles indicate what are called flier points. As mentioned in the matplotlib boxplot documentation, "flier points are those past the end of the whiskers".
Are losers more spirited in ultimate?
We can now answer the question asked in the title of this blog post and the answer is yes: the data says that losers are more spirited in Ultimate.
Or, alternatively, we can assume the hypothesis that losers and winners are equally spirited. Under this assumption, players must be biased by the end result of the game when evaluating the opponent's spirit of the game.
Lose-win Bias
It is natural to define the lose-win bias as the difference between the average spirit points obtained by the losing team and by the winning team. In other words: how many more spirit points do losers get than winners? The results are in the following table.
Tournament | lose-win bias |
---|---|
WMUCC 2022 | 0.8043 |
WUCC 2022 | 0.4768 |
CUC2022 | 0.5101 |
At WUCC 2022 and CUC 2022, the losing team gets approximately 0.5 more spirit points than the winning team. During WMUCC 2022, the losing team obtained 0.8 more spirit points than the winning team.
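With one row per game in a pandas DataFrame, the lose-win bias is a short computation. A sketch on made-up scores (the column names are illustrative, not the ones produced by my parsing script):

```python
import pandas as pd

# One row per game; toy data with illustrative column names.
games = pd.DataFrame({
    "score_a":  [15, 10, 15, 12],
    "score_b":  [12, 15, 13, 15],
    "spirit_a": [11, 12, 10, 12],  # spirit points received by team A
    "spirit_b": [12, 11, 11, 10],  # spirit points received by team B
})

a_won = games["score_a"] > games["score_b"]
winner_spirit = games["spirit_a"].where(a_won, games["spirit_b"])
loser_spirit = games["spirit_b"].where(a_won, games["spirit_a"])

# Lose-win bias: average spirit of losers minus average spirit of winners.
bias = loser_spirit.mean() - winner_spirit.mean()
print(bias)  # 1.25 on this toy data
```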
Interpretation
How can we interpret these results? Are losers really more spirited or can we accept that we are biased? Is there any other way to interpret the above results?
My interpretation is that we are biased by the end result, which means winners and losers will say something like this (if I allow myself to caricature in a provocative way):
"Dear opponent, thanks for losing; we will give you one more spirit point for not making more effort."
"Dear opponent, thanks for the game. You won against us, but your communication was not so good, so we give you one point less than we would usually have given if you had accepted to lose the game."
Of course, I am voluntarily exaggerating and being a little provocative here to make us think about our own biases. We would never say such sentences, but basically I think we might actually be doing this sometimes in a more disguised way.
I think that it is necessary that every ultimate frisbee player know about the existence of this lose-win bias in order to become more objective when evaluating the spirit of the game of the opponent.
Spirit score per score differential
I now suggest going a bit further in the data analysis. Instead of splitting the spirit points according to the win/lose dichotomy, we can study the spirit points according to the score differential. This should allow us to answer interesting questions such as:
- Is winning by 1 point the worst thing to do for your spirit score?
- By how many points should a team win to expect the most spirit points?
- By how many points should a team win to neutralize the spirit bias?
Below is a graphic showing the average spirit points obtained by a team according to the point differential during the three considered tournaments:
We observe that spirit scores were in general higher at WMUCC 2022. Also, each curve reaches its minimum around +1 or +2, which means you want to win by more than one or two points if you want both to win and to maximize your spirit points.
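The aggregation behind such a curve is a groupby on the signed score differential. A sketch on made-up per-team rows (each game contributes two rows, one per team; the numbers are invented):

```python
import pandas as pd

# One row per (game, team): the signed score differential from that
# team's point of view, and the spirit points that team received.
rows = pd.DataFrame({
    "diff":   [3, -3, 1, -1, 2, -2, 2, -2],
    "spirit": [10, 12, 11, 11, 10, 12, 9, 12],
})

# Average spirit received, per point differential.
by_diff = rows.groupby("diff")["spirit"].mean()
print(by_diff.loc[2])   # 9.5: winning by 2 scores lowest on this toy data
print(by_diff.loc[-2])  # 12.0
```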
Bias per score differential
In what follows, we discuss the bias per score differential. Here is the graphic summarizing the average lose-win bias according to the end-of-game point differential (one broken line per tournament):
Let's first explain the above graphic. The x-axis shows by how much you won the game: +2 means you won by 2 points, and -5 means you lost by 5. For each score difference, the graphic shows the average difference between your team's spirit score and the opponent's spirit score, for each of the three tournaments. Take for example the case where your team wins by 2: the graphic shows that on average, in all three tournaments, the spirit score you get is 1 less than the losing team's.
Some similarities appear across the three tournaments. On the right part of the graphic, for positive values on the x-axis, the lines are below zero, whereas for negative values they are above zero. This essentially means that losers receive on average a better spirit evaluation than winners.
One could have expected winning by one point to be worse than winning by two, because it is in that kind of game that a single called travel or foul may affect the outcome, and thus the spirit results. But the data shows that winning by 2 is worse than winning by 1 in terms of spirit bias. A possible interpretation goes as follows. When you lose on universe point, you show everyone that you were very close to winning, which is an honorable way of losing. Losing by 2, on the other hand, does not let you pretend you were close enough to winning the game. This may explain why the spirit bias is higher for games ending with a 2-point difference than with a 1-point difference.
Another pattern common to the three lines is that a local minimum of the lose-win bias is reached when winning by 2. It seems that winning by a higher margin (3, 4, 5 or 6 points) brings the lose-win bias globally closer to zero.
My personal interpretation is as follows. Winning by 1 or 2 points is not good for your spirit points, because you basically allow your opponent to think they could have won the game (which may bias them when evaluating your spirit points). Winning by 5 or 6 points seems to neutralize the lose-win bias: it is a large enough point difference to establish a hierarchy and make the opponent accept their loss, but not so large that part of the game becomes meaningless, which would spoil the fun for both teams.
When the score difference increases further, what happens is more chaotic, so I don't know if we can make any safe interpretation, but it seems winning by 7, 8 or 9 points is not good for your spirit. And then, winning by exactly 10 points also seems to neutralize the bias. There are fewer games ending with a difference of more than 10 points, so I will not discuss their statistics here.
Conclusion
To conclude, I would like to recall the real objective of this blog post, which is to make ultimate frisbee players acknowledge the existence of biases when evaluating the opponent's spirit of the game, one of them being related to the score outcome. Once this is acknowledged, the next step is to search for ways to leverage the bias. This task belongs to each and every ultimate frisbee player.
Code and data
I made public the code that allows one to reproduce the computations and graphics. Everything is in this GitLab repository:
https://gitlab.com/seblabbe/spirit-bias-in-ultimate
The code is written in Python. Data is downloaded with the urllib library and stored as csv files. Parsing of Ultiorganizer websites is done in a Python script, ultiorganizer_parser.py, that I wrote. Analysis of the csv files is done with the pandas library in Jupyter notebooks. Graphics are made with matplotlib.
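The pipeline has the following shape (a minimal offline sketch; the actual csv columns produced by ultiorganizer_parser.py may differ):

```python
import io
import pandas as pd

# Sketch of the analysis stage: in the real pipeline, the csv text is
# first produced by downloading and parsing Ultiorganizer pages.
def load_games(csv_text):
    """Parse a csv of game results into a pandas DataFrame."""
    return pd.read_csv(io.StringIO(csv_text))

# Hypothetical columns for illustration only.
sample = """home,away,home_score,away_score,home_spirit,away_spirit
A,B,15,12,10,11
B,C,15,14,12,10
"""
df = load_games(sample)
print(df.shape)  # (2, 6)
```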
]]>On Thursday, June 16, 2022, while walking from the city center to the Parc de la Boverie for the Thursday evening gathering of the DYADISC conference, I had the pleasant surprise of seeing that three of the 25 suggestions I made in 2016 have been realized:
- Create a bike lane in the parking underpass of the Kennedy bridge, to relieve the RAVeL path, which should be reserved for pedestrians walking along the Meuse.
- On the Quai Marcellis, replace some parking with a bike lane using the Kennedy bridge underpass, for the same reason of reserving the RAVeL for pedestrians in Outremeuse.
- Create a bike lane on the Place Cockerill to extend the flow of bikes coming over the pedestrian bridge in front of the Grande Poste.
That's great!
]]>In his Dancing links article, Donald Knuth considered the problem of packing 45 Y pentominoes into a 15 x 15 square. We can redo this computation in SageMath using its implementation of his dancing links algorithm.
Dancing links takes 1.24 seconds to find a solution:
sage: from sage.combinat.tiling import Polyomino, TilingSolver
sage: y = Polyomino([(0,0),(1,0),(2,0),(3,0),(2,1)])
sage: T = TilingSolver([y], box=(15, 15), reusable=True, reflection=True)
sage: %time solution = next(T.solve())
CPU times: user 1.23 s, sys: 11.9 ms, total: 1.24 s
Wall time: 1.24 s
The first solution found is:
sage: sum(T.row_to_polyomino(row_number).show2d() for row_number in solution)
What is nice about the dancing links algorithm is that it can list all solutions to a problem. For example, it takes less than 3 minutes to find all solutions for tiling a 15 x 15 rectangle with the Y pentomino:
sage: %time T.number_of_solutions()
CPU times: user 2min 46s, sys: 3.46 ms, total: 2min 46s
Wall time: 2min 46s
1696
It takes more time (38s) to find a first solution in a larger 20 x 20 rectangle:
sage: T = TilingSolver([y], box=(20,20), reusable=True, reflection=True)
sage: %time solution = next(T.solve())
CPU times: user 38.2 s, sys: 7.88 ms, total: 38.2 s
Wall time: 38.2 s
The polyomino tiling problem is reduced to an instance of the exact cover problem, which is represented by a sparse matrix of 0s and 1s:
sage: dlx = T.dlx_solver()
sage: dlx
Dancing links solver for 400 columns and 2584 rows
We observe that finding a solution to this instance takes the same amount of time. This is normal, since it is exactly what is used behind the scenes when calling next(T.solve()) above:
sage: %time sol = dlx.one_solution(ncpus=1)
CPU times: user 38.6 s, sys: 48 ms, total: 38.6 s
Wall time: 38.5 s
One way to improve the running time is to split the problem into parts and use several processors to work on each subproblem. Here a random column is used to split the problem, which may affect the time it takes. Sometimes a good column is chosen and it works great, as below, but sometimes it does not:
sage: %time sol = dlx.one_solution(ncpus=2)
CPU times: user 941 µs, sys: 32 ms, total: 32.9 ms
Wall time: 1.41 s
The reduction from a dancing links instance to a SAT instance (#29338) and to a MILP instance (#29955) was merged into SageMath 9.2 during the last year. A discussion with Franco Saliola motivated me to implement these translations, since he was also searching for faster ways to solve dancing links problems. Indeed, some problems are solved faster with other kinds of solvers, so it is good to make comparisons between solvers.
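The idea behind the SAT reduction can be sketched in a few lines of plain Python (a minimal sketch of the principle, not the encoding actually used in #29338): introduce one boolean variable per row, and require each column to be covered exactly once.

```python
from itertools import combinations

def exact_cover_to_sat(n_columns, rows):
    """Encode an exact cover instance as CNF clauses.

    One boolean variable per row (1-based, as in DIMACS format).
    For each column: at least one incident row is chosen, and
    no two incident rows are chosen together.
    """
    clauses = []
    for col in range(n_columns):
        incident = [i + 1 for i, row in enumerate(rows) if col in row]
        clauses.append(incident)              # at least one row covers col
        for a, b in combinations(incident, 2):
            clauses.append([-a, -b])          # at most one row covers col
    return clauses

# Tiny instance: 3 columns, rows {0,1}, {2}, {0,1,2}.
cnf = exact_cover_to_sat(3, [[0, 1], [2], [0, 1, 2]])
print(len(cnf))  # 6
```

Any satisfying assignment of such a formula selects a set of rows forming an exact cover, which is what the SAT solver searches for.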
Therefore, with a recent enough version of SageMath, we can now try to find a tiling with other kinds of solvers. From my experience with tilings by Wang tiles, I know that the Glucose SAT solver is quite efficient at solving tilings of the plane. This is why I test it below. Glucose is now an optional package of SageMath, which can be installed with:
sage -i glucose
Update (June 20th, 2022): It seems sage -i glucose no longer works. The new procedure is to use ./configure --enable-glucose when installing from source. See the question Unable to install glucose SAT solver with Sage on ask.sagemath.org for more information.
Glucose finds a solution for the 20 x 20 rectangle in 1.5 seconds:
sage: %time sol = dlx.one_solution_using_sat_solver('glucose')
CPU times: user 306 ms, sys: 12.1 ms, total: 319 ms
Wall time: 1.51 s
The rows of the solution found by Glucose are:
sage: sol [0, 15, 19, 38, 74, 245, 270, 310, 320, 327, 332, 366, 419, 557, 582, 613, 660, 665, 686, 699, 707, 760, 772, 774, 781, 802, 814, 816, 847, 855, 876, 905, 1025, 1070, 1081, 1092, 1148, 1165, 1249, 1273, 1283, 1299, 1354, 1516, 1549, 1599, 1609, 1627, 1633, 1650, 1717, 1728, 1739, 1773, 1795, 1891, 1908, 1918, 1995, 2004, 2016, 2029, 2037, 2090, 2102, 2104, 2111, 2132, 2144, 2146, 2185, 2235, 2301, 2460, 2472, 2498, 2538, 2548, 2573, 2583]
Each row corresponds to a Y polyomino embedded in the plane at a certain position:
sage: sum(T.row_to_polyomino(row_number).show2d() for row_number in sol)
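To see where the rows of such a dancing links matrix come from, here is a sketch in plain Python (not the SageMath internals; the helper names orientations and placements are mine) that enumerates the embeddings of the Y pentomino in a rectangle, one dlx row per embedding. For the 20 x 20 box it produces 2584 rows, in agreement with the solver's report above.

```python
def orientations(cells):
    """All rotations/reflections of a 2d polyomino, as normalized
    frozensets of cells (translated so min coordinates are 0)."""
    shapes = set()
    for flip in (False, True):
        pts = [(-x, y) if flip else (x, y) for x, y in cells]
        for _ in range(4):
            pts = [(y, -x) for x, y in pts]  # rotate by 90 degrees
            mx = min(x for x, y in pts)
            my = min(y for x, y in pts)
            shapes.add(frozenset((x - mx, y - my) for x, y in pts))
    return shapes

def placements(cells, width, height):
    """One dlx row per embedding of the polyomino in the box: the row
    lists the indices (x*height + y) of the covered cells."""
    rows = []
    for shape in orientations(cells):
        w = max(x for x, y in shape) + 1
        h = max(y for x, y in shape) + 1
        for dx in range(width - w + 1):
            for dy in range(height - h + 1):
                rows.append(sorted((x + dx) * height + (y + dy)
                                   for x, y in shape))
    return rows

y_pent = [(0, 0), (1, 0), (2, 0), (3, 0), (2, 1)]
rows = placements(y_pent, 15, 15)
print(len(rows))  # 1344
```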
Glucose-Syrup (a parallelized version of Glucose) takes about the same time (1 second) to find a tiling of a 20 x 20 rectangle:
sage: T = TilingSolver([y], box=(20, 20), reusable=True, reflection=True)
sage: dlx = T.dlx_solver()
sage: dlx
Dancing links solver for 400 columns and 2584 rows
sage: %time sol = dlx.one_solution_using_sat_solver('glucose-syrup')
CPU times: user 285 ms, sys: 20 ms, total: 305 ms
Wall time: 1.09 s
Searching for a tiling of a 30 x 30 rectangle, Glucose takes 40s and Glucose-Syrup takes 16s, while the dancing links algorithm takes much longer (next(T.solve()), which uses dancing links, does not halt within 5 minutes):
sage: T = TilingSolver([y], box=(30,30), reusable=True, reflection=True)
sage: dlx = T.dlx_solver()
sage: dlx
Dancing links solver for 900 columns and 6264 rows
sage: %time sol = dlx.one_solution_using_sat_solver('glucose')
CPU times: user 708 ms, sys: 36 ms, total: 744 ms
Wall time: 40.5 s
sage: %time sol = dlx.one_solution_using_sat_solver('glucose-syrup')
CPU times: user 754 ms, sys: 39.1 ms, total: 793 ms
Wall time: 16.1 s
Searching for a tiling of a 35 x 35 rectangle, Glucose takes 2min 5s and Glucose-Syrup takes 1min 16s:
sage: T = TilingSolver([y], box=(35, 35), reusable=True, reflection=True)
sage: dlx = T.dlx_solver()
sage: dlx
Dancing links solver for 1225 columns and 8704 rows
sage: %time sol = dlx.one_solution_using_sat_solver('glucose')
CPU times: user 1.07 s, sys: 47.9 ms, total: 1.12 s
Wall time: 2min 5s
sage: %time sol = dlx.one_solution_using_sat_solver('glucose-syrup')
CPU times: user 1.06 s, sys: 24 ms, total: 1.09 s
Wall time: 1min 16s
Here is the info of the computer used for the above timings (a 4-year-old laptop running Ubuntu 20.04):
$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       39 bits physical, 48 bits virtual
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               158
Model name:          Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
Stepping:            9
CPU MHz:             3549.025
CPU max MHz:         3900.0000
CPU min MHz:         800.0000
BogoMIPS:            5799.77
Virtualization:      VT-x
L1d cache:           128 KiB
L1i cache:           128 KiB
L2 cache:            1 MiB
L3 cache:            8 MiB
NUMA node0 CPU(s):   0-7
To finish, I should mention that the implementation of dancing links made in SageMath is not the best one. Indeed, according to what Franco Saliola told me, the dancing links code written by Donald Knuth himself and available on his website (Franco added a makefile to compile it more easily) is faster. It would be interesting to confirm this and, if possible, improve the implementation made in SageMath.
]]>The doctoral school of mathematics and computer science (EDMI) of the University of Bordeaux offers courses every year. In this context, this year I will give the course Computing and programming with Python or SageMath, and best practices, which will take place on February 25, March 4, March 11 and March 18, 2021, from 9am to 12pm.
The Thursday morning slot corresponds to the slot of the Jeudis Sage at LaBRI, where a group of Python users meet every week to do development while asking questions to the other users present.
The course will take place on the BigBlueButton software, to which registered participants will connect via their web browser (Mozilla Firefox or Google Chrome). According to the minimal requirements of the BigBlueButton client, Safari and IE should be avoided, otherwise some features will not work. You can familiarize yourself with the interface by watching this BigBlueButton tutorial (on YouTube, 5 minutes). Consult the following support pages in case of audio or internet issues.
In the first session, we will present the basics of Python and the various interfaces. We will not be able to spend too much time installing the various software, so it would be preferable if each participant has already done the installations before the course. This will allow you to reproduce the commands shown and do the exercises.
The software to install before the course is:
- Python 3
- SageMath (optional)
- IPython
- the classic Jupyter notebook
- JupyterLab
Python 3: Normally, Python is already installed on your computer. You can confirm this by typing python or python3 in a terminal (Linux/Mac) or in the command prompt (Windows). You should obtain something like this:
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
SageMath (optional): SageMath is a free mathematics software based on Python, bringing together hundreds of packages and libraries. There are several ways to install SageMath, and I recommend reading this documentation to determine the installation method that suits you best. Otherwise, you can download the binaries directly here.
You should obtain something like this:
┌────────────────────────────────────────────────────────────────────┐
│ SageMath version 9.2, Release Date: 2020-10-24                     │
│ Using Python 3.8.5. Type "help()" for help.                        │
└────────────────────────────────────────────────────────────────────┘
sage:
IPython:
If you have already installed SageMath, you're all set, since IPython is part of it. The command sage -ipython will open it.
If you do not have SageMath, you can install IPython via pip install ipython or by following these instructions from the IPython website. Then the command ipython in the terminal (Linux, OS X) or in the command prompt (Windows) will open it.
You should obtain something like this:
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:
Jupyter:
If you have already installed SageMath, you're all set, since Jupyter is part of it. The command sage -n jupyter will open it.
If you do not have SageMath, you can follow these instructions from the jupyter.org website. Then the command jupyter notebook in the terminal (Linux, OS X) or in the command prompt (Windows) will open it.
You should obtain something like this in your browser:
JupyterLab:
If you have already installed SageMath, you can install JupyterLab with sage -i jupyterlab and open it with sage -n jupyterlab. On Windows, it is slightly different: you should instead run pip install jupyterlab in the SageMath console, according to this recent answer on ask.sagemath.org.
If you do not have SageMath, you can follow the instructions from the same website as above. Then the command jupyter-lab or jupyter lab in the terminal (Linux, OS X) or in the command prompt (Windows) will open it.
You should obtain something like this in your browser:
]]>Suppose that you 3D print many copies of the following 3D hexomino at home:
sage: from sage.combinat.tiling import Polyomino, TilingSolver
sage: p = Polyomino([(0,0,0), (0,1,0), (1,0,0), (2,0,0), (2,1,0), (2,1,1)], color='blue')
sage: p.show3d()
Launched html viewer for Graphics3d Object
You would like to know whether you can tile a larger polyomino, in particular a rectangular box, with many copies of it. The TilingSolver module in SageMath is made for that. See also this recent question on ask.sagemath.org.
sage: T = TilingSolver([p], (7,5,3), rotation=True, reflection=False, reusable=True)
sage: T
Tiling solver of 1 pieces into a box of size 24
Rotation allowed: True
Reflection allowed: False
Reusing pieces allowed: True
There is no solution when tiling a box of shape 7x5x3 with this polyomino:
sage: T.number_of_solutions()
0
But there are 4 solutions when tiling a box of shape 4x3x2 with this polyomino:
sage: T = TilingSolver([p], (4,3,2), rotation=True, reflection=False, reusable=True)
sage: T.number_of_solutions()
4
We construct the list of solutions:
sage: solutions = [sol for sol in T.solve()]
Each solution contains the isometric copies of the polyominoes tiling the box:
sage: solutions[0]
[Polyomino: [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 0), (2, 0, 0), (2, 1, 0)], Color: #ff0000,
 Polyomino: [(0, 1, 1), (0, 2, 0), (0, 2, 1), (1, 1, 1), (2, 1, 1), (2, 2, 1)], Color: #ff0000,
 Polyomino: [(1, 0, 0), (1, 0, 1), (2, 0, 1), (3, 0, 0), (3, 0, 1), (3, 1, 0)], Color: #ff0000,
 Polyomino: [(1, 2, 0), (1, 2, 1), (2, 2, 0), (3, 1, 1), (3, 2, 0), (3, 2, 1)], Color: #ff0000]
It may be easier to visualize the solutions, so we define the following function to draw them with a different color for each piece:
sage: def draw_solution(solution, size=0.9):
....:     colors = rainbow(len(solution))
....:     for piece, col in zip(solution, colors):
....:         piece.color(col)
....:     return sum((piece.show3d(size=size) for piece in solution), Graphics())
sage: G = [draw_solution(sol) for sol in solutions]
sage: G
[Graphics3d Object, Graphics3d Object, Graphics3d Object, Graphics3d Object]
sage: G[0] # in Sage, this will open a 3d viewer automatically
sage: G[1]
sage: G[2]
sage: G[3]
We may save the solutions to a file:
sage: G[0].save('solution0.png', aspect_ratio=1, zoom=1.2)
sage: G[1].save('solution1.png', aspect_ratio=1, zoom=1.2)
sage: G[2].save('solution2.png', aspect_ratio=1, zoom=1.2)
sage: G[3].save('solution3.png', aspect_ratio=1, zoom=1.2)
Question: are all of the 4 solutions isometric to each other?
The tiling problem is solved via a reduction to the exact cover problem, for which Knuth's dancing links algorithm provides all the solutions. One can see the rows of the dancing links matrix as follows:
sage: d = T.dlx_solver()
sage: d
Dancing links solver for 24 columns and 56 rows
sage: d.rows()
[[0, 1, 2, 4, 5, 11],
 [6, 7, 8, 10, 11, 17],
 [12, 13, 14, 16, 17, 23],
 ...
 [4, 6, 7, 9, 10, 11],
 [10, 12, 13, 15, 16, 17],
 [16, 18, 19, 21, 22, 23]]
The solutions to the dlx solver can be obtained as follows:
sage: it = d.solutions_iterator()
sage: next(it)
[3, 36, 19, 52]
These are the indices of the rows, each corresponding to an isometric copy of the polyomino within the box.
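The search that dancing links performs efficiently can be sketched in a few lines of plain Python: this is Knuth's Algorithm X without the linked-list data structure (the function name and the dict-based representation are mine, for illustration only).

```python
def algorithm_x(columns, rows, partial=None):
    """Yield every exact cover of the set `columns` by the rows,
    given as a dict mapping a row index to a set of columns."""
    if partial is None:
        partial = []
    if not columns:
        yield list(partial)
        return
    # Branch on the column covered by the fewest rows (Knuth's heuristic).
    col = min(columns, key=lambda c: sum(c in rows[i] for i in rows))
    for i in [i for i in rows if col in rows[i]]:
        partial.append(i)
        # Keep only the rows disjoint from the chosen one.
        sub = {j: rows[j] for j in rows if not (rows[j] & rows[i])}
        yield from algorithm_x(columns - rows[i], sub, partial)
        partial.pop()

# Tiny instance with exactly two exact covers: {0,1} and {2,3}.
rows = {0: {0, 1}, 1: {2}, 2: {0, 2}, 3: {1}}
solutions = list(algorithm_x({0, 1, 2}, rows))
print(solutions)
```

Dancing links implements the same search, but undoes the row and column removals in constant time per link, which is what makes it fast in practice.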
Since SageMath 9.2, it is possible to reduce the problem to a MILP problem or a SAT instance (see #29338 and #29955):
sage: d.to_milp()
(Boolean Program (no objective, 56 variables, 24 constraints),
 MIPVariable of dimension 1)
sage: d.to_sat_solver()
CryptoMiniSat solver: 56 variables, 2348 clauses.
In November 2015, I wanted to share intuitions I had developed on the behavior of various multidimensional continued fraction algorithms, obtained from various kinds of experiments performed with them, often involving combinatorics and digital geometry, but also including the computation of their first two Lyapunov exponents.
As continued fractions are deeply related to the combinatorics of Sturmian sequences, which can be seen as the digitization of a straight line in the grid \(\mathbb{Z}^2\), multidimensional continued fraction algorithms are related to the digitization of straight lines and hyperplanes in \(\mathbb{Z}^d\).
This is why I shared those experiments in what I called the 3-dimensional Continued Fraction Algorithms Cheat Sheets, a format inspired by typical cheat sheets found on the web. All of the experiments can be reproduced using the optional SageMath package slabbe, where I share my research code. People asked me whether I was going to try to publish those Cheat Sheets, but I was afraid publication would change the organization of the information and data on each page, so, in the end, I never submitted them anywhere.
Here I should say that \(d\) stands for the dimension of the vector space on which the involved matrices act, and \(d-1\) is the dimension of the projective space on which the algorithm acts.
One of the consequences of the Cheat Sheets is that they made us realize that the algorithm proposed by Julien Cassaigne had the same first two Lyapunov exponents as the Selmer algorithm (the first 3 significant digits were the same). Julien then discovered the explanation: his algorithm is conjugate to some semi-sorted version of the Selmer algorithm. This result was shared during the WORDS 2017 conference. Julien Leroy, Julien Cassaigne and I are still working on the extended version of the paper. It is taking longer, mainly through my fault, because I have been working hard on aperiodic Wang tilings in the previous 2 years.
In July 2019, Wolfgang, Valérie and Jörg asked me to compute the first two Lyapunov exponents of \(d\)-dimensional multidimensional continued fraction algorithms for \(d\) larger than 3. The main question of interest is whether the second Lyapunov exponent remains negative as the dimension increases. This property is related to the notion of strong convergence almost everywhere of the simultaneous diophantine approximations provided by the algorithm for a fixed vector of real numbers. It did not take me too long to update my package, since I had started to generalize my code to larger dimensions during Fall 2017. It turns out that, as the dimension increases, all known MCF algorithms have their second Lyapunov exponent become positive. My computations were thus confirming what they eventually published in their preprint in November 2019.
My motivation for sharing the results is the conference Multidimensional Continued Fractions and Euclidean Dynamics held this week (it was supposed to be held in the Lorentz Center, March 23-27, 2020, but got cancelled because of the coronavirus), where some discussions during video meetings are related to this subject.
The computations performed below can be summarized in one graphic showing the values of \(1-\theta_2/\theta_1\) with respect to \(d\) for various \(d\)-dimensional MCF algorithms. It seems that \(\theta_2\) is negative up to dimension 10 for Brun, up to dimension 4 for Selmer and up to dimension 5 for ARP.
I have to say that I was disappointed by the results, because the Arnoux-Rauzy-Poincaré (ARP) algorithm that Valérie and I introduced does not perform so well: its second Lyapunov exponent seems to become positive for dimension \(d\geq 6\). I had good expectations for ARP because it reaches the highest value of \(1-\theta_2/\theta_1\) in the computations performed in the Cheat Sheets, thus better than Brun and better than Selmer when \(d=3\).
The algorithm for the computation of the first two Lyapunov exponents was provided to me by Vincent Delecroix. It applies the map \((v,w)\mapsto(M^{-1}v,M^T w)\) millions of times. The evolution of the size of the vector \(v\) gives the first Lyapunov exponent. The evolution of the size of the vector \(w\) gives the second Lyapunov exponent. Since the computation is performed on 64-bit double floating point numbers, there are numerical issues to deal with. This is why a Gram-Schmidt operation is performed on the vector \(w\) each time the vectors are renormalized, to keep \(w\) orthogonal to \(v\). Otherwise, the numerical errors accumulate and the computed value of \(\theta_2\) becomes the same as \(\theta_1\). You can look at the algorithm online, starting at line 1723 of the file mult_cont_frac_pyx.pyx from my optional package.
I do not know where Vincent took that algorithm from, so I do not know how exact it is and whether there exists any proof of lower and upper bounds on the computations being performed. What I can say is that it is quite reliable, in the sense that it returns the same values over and over again (by that I mean 3 common most significant digits) for any fixed input (number of iterations).
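The Gram-Schmidt renormalization idea can be illustrated on a toy example (this is a sketch of the principle only, not Vincent's algorithm applied to the MCF cocycle): for a constant matrix, the two Lyapunov exponents are the logarithms of the moduli of its eigenvalues, and the two-vector renormalization scheme recovers them numerically.

```python
import math

def lyapunov_2d(apply_M, v, w, n):
    """Estimate the first two Lyapunov exponents of an iterated linear
    map by tracking two vectors: normalize v, then orthogonalize w
    against v (Gram-Schmidt) and normalize it, accumulating the logs
    of the norms at each step."""
    log1 = log2 = 0.0
    for _ in range(n):
        v = apply_M(v)
        w = apply_M(w)
        nv = math.hypot(*v)
        log1 += math.log(nv)
        v = (v[0] / nv, v[1] / nv)
        # Project w off v so that w keeps tracking the second direction.
        dot = w[0] * v[0] + w[1] * v[1]
        w = (w[0] - dot * v[0], w[1] - dot * v[1])
        nw = math.hypot(*w)
        log2 += math.log(nw)
        w = (w[0] / nw, w[1] / nw)
    return log1 / n, log2 / n

# Constant matrix [[2,1],[1,1]]: its eigenvalues are phi^2 and phi^-2,
# so the exponents are +-2*log(phi) where phi is the golden ratio.
M = lambda p: (2 * p[0] + p[1], p[0] + p[1])
t1, t2 = lyapunov_2d(M, (1.0, 0.0), (0.0, 1.0), 2000)
phi = (1 + math.sqrt(5)) / 2
print(t1, t2, 2 * math.log(phi))
```

Without the Gram-Schmidt projection, w would align with the dominant direction and both estimates would converge to \(\theta_1\), which is exactly the numerical issue described above.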
Below, I show the code illustrating how to reproduce the results.
The version 0.6 (November 2019) of my package slabbe includes the necessary code to deal with some \(d\)-dimensional Multidimensional Continued Fraction (MCF) algorithms. Its documentation is available online. It is a PIP package, so it can be installed like this:
sage -pip install slabbe
Recall that the dimension \(d\) below is the linear one and \(d-1\) is the dimension of the space for the corresponding projective algorithm.
Import the Brun, Selmer and Arnoux-Rauzy-Poincaré MCF algorithms from the optional package:
sage: from slabbe.mult_cont_frac import Brun, Selmer, ARP
The computation of the first two Lyapunov exponents performed on one single orbit:
sage: Brun(dim=3).lyapunov_exponents(n_iterations=10^7)
(0.30473782969922547, -0.11220958022368056, 1.3682167728713919)
The starting point is taken randomly, but the results, in the form of a 3-tuple \((\theta_1,\theta_2,1-\theta_2/\theta_1)\), are about the same:
sage: Brun(dim=3).lyapunov_exponents(n_iterations=10^7)
(0.30345018206132324, -0.11171509867725296, 1.3681497170915415)
Increasing the dimension \(d\) yields:
sage: Brun(dim=4).lyapunov_exponents(n_iterations=10^7)
(0.32639514522732005, -0.07191456560115839, 1.2203297648654456)
sage: Brun(dim=5).lyapunov_exponents(n_iterations=10^7)
(0.30918877340506756, -0.0463930802132972, 1.1500477514185734)
It performs an orbit of length \(10^7\) in about 0.5 seconds, of length \(10^8\) in about 5 seconds and of length \(10^9\) in about 50 seconds:
sage: %time Brun(dim=3).lyapunov_exponents(n_iterations=10^7)
CPU times: user 540 ms, sys: 0 ns, total: 540 ms
Wall time: 539 ms
(0.30488799356325225, -0.11234354880132114, 1.3684748208296182)
sage: %time Brun(dim=3).lyapunov_exponents(n_iterations=10^8)
CPU times: user 5.09 s, sys: 0 ns, total: 5.09 s
Wall time: 5.08 s
(0.30455473631148755, -0.11217550411862384, 1.3683262505689446)
sage: %time Brun(dim=3).lyapunov_exponents(n_iterations=10^9)
CPU times: user 51.2 s, sys: 0 ns, total: 51.2 s
Wall time: 51.2 s
(0.30438755982577026, -0.11211562816821799, 1.368331834035505)
Here, in what follows, I must admit that I needed to make a small fix to my package, so the code below will not work in version 0.6. I will update my package in the coming days so that the computations below can be reproduced:
sage: from slabbe.lyapunov import lyapunov_comparison_table
For each \(3\leq d\leq 20\), I compute 30 orbits and show the most significant digits and the standard deviation of the 30 computed values.
For Brun algorithm:
sage: algos = [Brun(d) for d in range(3,21)]
sage: %time lyapunov_comparison_table(algos, n_orbits=30, n_iterations=10^7, ncpus=8)
CPU times: user 190 ms, sys: 2.8 s, total: 2.99 s
Wall time: 6min 31s
  Algorithm     \#Orbits   $\theta_1$ (std)     $\theta_2$ (std)      $1-\theta_2/\theta_1$ (std)
+-------------+----------+--------------------+---------------------+-----------------------------+
  Brun (d=3)    30         0.3045 (0.00040)     -0.1122 (0.00017)     1.3683 (0.00022)
  Brun (d=4)    30         0.32632 (0.000055)   -0.07188 (0.000051)   1.2203 (0.00014)
  Brun (d=5)    30         0.30919 (0.000032)   -0.04647 (0.000041)   1.1503 (0.00013)
  Brun (d=6)    30         0.28626 (0.000027)   -0.03043 (0.000035)   1.1063 (0.00012)
  Brun (d=7)    30         0.26441 (0.000024)   -0.01966 (0.000027)   1.0743 (0.00010)
  Brun (d=8)    30         0.24504 (0.000027)   -0.01207 (0.000024)   1.04926 (0.000096)
  Brun (d=9)    30         0.22824 (0.000021)   -0.00649 (0.000026)   1.0284 (0.00012)
  Brun (d=10)   30         0.2138 (0.00098)     -0.0022 (0.00015)     1.0104 (0.00074)
  Brun (d=11)   30         0.20085 (0.000015)   0.00106 (0.000022)    0.9947 (0.00011)
  Brun (d=12)   30         0.18962 (0.000017)   0.00368 (0.000021)    0.9806 (0.00011)
  Brun (d=13)   30         0.17967 (0.000011)   0.00580 (0.000020)    0.9677 (0.00011)
  Brun (d=14)   30         0.17077 (0.000011)   0.00755 (0.000021)    0.9558 (0.00012)
  Brun (d=15)   30         0.16278 (0.000012)   0.00900 (0.000017)    0.9447 (0.00010)
  Brun (d=16)   30         0.15556 (0.000011)   0.01022 (0.000013)    0.93433 (0.000086)
  Brun (d=17)   30         0.149002 (9.5e-6)    0.01124 (0.000015)    0.9246 (0.00010)
  Brun (d=18)   30         0.14303 (0.000010)   0.01211 (0.000019)    0.9153 (0.00014)
  Brun (d=19)   30         0.13755 (0.000012)   0.01285 (0.000018)    0.9065 (0.00013)
  Brun (d=20)   30         0.13251 (0.000011)   0.01349 (0.000019)    0.8982 (0.00014)
For Selmer algorithm:
sage: algos = [Selmer(d) for d in range(3,21)]
sage: %time lyapunov_comparison_table(algos, n_orbits=30, n_iterations=10^7, ncpus=8)
CPU times: user 203 ms, sys: 2.78 s, total: 2.98 s
Wall time: 6min 27s
  Algorithm       \#Orbits   $\theta_1$ (std)     $\theta_2$ (std)      $1-\theta_2/\theta_1$ (std)
+---------------+----------+--------------------+---------------------+-----------------------------+
  Selmer (d=3)    30         0.1827 (0.00041)     -0.0707 (0.00017)     1.3871 (0.00029)
  Selmer (d=4)    30         0.15808 (0.000058)   -0.02282 (0.000036)   1.1444 (0.00023)
  Selmer (d=5)    30         0.13199 (0.000033)   0.00176 (0.000034)    0.9866 (0.00026)
  Selmer (d=6)    30         0.11205 (0.000017)   0.01595 (0.000036)    0.8577 (0.00031)
  Selmer (d=7)    30         0.09697 (0.000012)   0.02481 (0.000030)    0.7442 (0.00032)
  Selmer (d=8)    30         0.085340 (8.5e-6)    0.03041 (0.000032)    0.6437 (0.00036)
  Selmer (d=9)    30         0.076136 (5.9e-6)    0.03379 (0.000032)    0.5561 (0.00041)
  Selmer (d=10)   30         0.068690 (5.5e-6)    0.03565 (0.000023)    0.4810 (0.00032)
  Selmer (d=11)   30         0.062557 (4.4e-6)    0.03646 (0.000021)    0.4172 (0.00031)
  Selmer (d=12)   30         0.057417 (3.6e-6)    0.03654 (0.000017)    0.3636 (0.00028)
  Selmer (d=13)   30         0.05305 (0.000011)   0.03615 (0.000018)    0.3186 (0.00032)
  Selmer (d=14)   30         0.04928 (0.000060)   0.03546 (0.000051)    0.2804 (0.00040)
  Selmer (d=15)   30         0.046040 (2.0e-6)    0.03462 (0.000013)    0.2482 (0.00027)
  Selmer (d=16)   30         0.04318 (0.000011)   0.03365 (0.000014)    0.2208 (0.00028)
  Selmer (d=17)   30         0.040658 (3.3e-6)    0.03263 (0.000013)    0.1974 (0.00030)
  Selmer (d=18)   30         0.038411 (2.7e-6)    0.031596 (9.8e-6)     0.1774 (0.00022)
  Selmer (d=19)   30         0.036399 (2.2e-6)    0.030571 (8.0e-6)     0.1601 (0.00019)
  Selmer (d=20)   30         0.0346 (0.00011)     0.02955 (0.000093)    0.1452 (0.00019)
For Arnoux-Rauzy-Poincaré algorithm:
sage: algos = [ARP(d) for d in range(3,21)]
sage: %time lyapunov_comparison_table(algos, n_orbits=30, n_iterations=10^7, ncpus=8)
CPU times: user 226 ms, sys: 2.76 s, total: 2.99 s
Wall time: 13min 20s
  Algorithm                        \#Orbits   $\theta_1$ (std)     $\theta_2$ (std)      $1-\theta_2/\theta_1$ (std)
+--------------------------------+----------+--------------------+---------------------+-----------------------------+
  Arnoux-Rauzy-Poincar\'e (d=3)    30         0.4428 (0.00056)     -0.1722 (0.00025)     1.3888 (0.00016)
  Arnoux-Rauzy-Poincar\'e (d=4)    30         0.6811 (0.00020)     -0.16480 (0.000085)   1.24198 (0.000093)
  Arnoux-Rauzy-Poincar\'e (d=5)    30         0.7982 (0.00012)     -0.0776 (0.00010)     1.0972 (0.00013)
  Arnoux-Rauzy-Poincar\'e (d=6)    30         0.83563 (0.000091)   0.0475 (0.00010)      0.9432 (0.00012)
  Arnoux-Rauzy-Poincar\'e (d=7)    30         0.8363 (0.00011)     0.1802 (0.00016)      0.7845 (0.00020)
  Arnoux-Rauzy-Poincar\'e (d=8)    30         0.8213 (0.00013)     0.3074 (0.00023)      0.6257 (0.00028)
  Arnoux-Rauzy-Poincar\'e (d=9)    30         0.8030 (0.00012)     0.4205 (0.00017)      0.4763 (0.00022)
  Arnoux-Rauzy-Poincar\'e (d=10)   30         0.7899 (0.00011)     0.5160 (0.00016)      0.3467 (0.00020)
  Arnoux-Rauzy-Poincar\'e (d=11)   30         0.7856 (0.00014)     0.5924 (0.00020)      0.2459 (0.00022)
  Arnoux-Rauzy-Poincar\'e (d=12)   30         0.7883 (0.00010)     0.6497 (0.00012)      0.1759 (0.00014)
  Arnoux-Rauzy-Poincar\'e (d=13)   30         0.7930 (0.00010)     0.6892 (0.00014)      0.1309 (0.00014)
  Arnoux-Rauzy-Poincar\'e (d=14)   30         0.7962 (0.00012)     0.7147 (0.00015)      0.10239 (0.000077)
  Arnoux-Rauzy-Poincar\'e (d=15)   30         0.7974 (0.00012)     0.7309 (0.00014)      0.08340 (0.000074)
  Arnoux-Rauzy-Poincar\'e (d=16)   30         0.7969 (0.00015)     0.7411 (0.00014)      0.07010 (0.000048)
  Arnoux-Rauzy-Poincar\'e (d=17)   30         0.7960 (0.00014)     0.7482 (0.00014)      0.06005 (0.000050)
  Arnoux-Rauzy-Poincar\'e (d=18)   30         0.7952 (0.00013)     0.7537 (0.00014)      0.05218 (0.000046)
  Arnoux-Rauzy-Poincar\'e (d=19)   30         0.7949 (0.00012)     0.7584 (0.00013)      0.04582 (0.000035)
  Arnoux-Rauzy-Poincar\'e (d=20)   30         0.7948 (0.00014)     0.7626 (0.00013)      0.04058 (0.000025)
The computation of the figure shown above is done with the code below:
sage: brun_list = [1.3683, 1.2203, 1.1503, 1.1063, 1.0743, 1.04926, 1.0284,
....:     1.0104, 0.9947, 0.9806, 0.9677, 0.9558, 0.9447, 0.93433, 0.9246,
....:     0.9153, 0.9065, 0.8982]
sage: selmer_list = [1.3871, 1.1444, 0.9866, 0.8577, 0.7442, 0.6437, 0.5561,
....:     0.4810, 0.4172, 0.3636, 0.3186, 0.2804, 0.2482, 0.2208, 0.1974,
....:     0.1774, 0.1601, 0.1452]
sage: arp_list = [1.3888, 1.24198, 1.0972, 0.9432, 0.7845, 0.6257, 0.4763,
....:     0.3467, 0.2459, 0.1759, 0.1309, 0.10239, 0.08340, 0.07010, 0.06005,
....:     0.05218, 0.04582, 0.04058]
sage: brun_points = list(enumerate(brun_list, start=3))
sage: selmer_points = list(enumerate(selmer_list, start=3))
sage: arp_points = list(enumerate(arp_list, start=3))
sage: G = Graphics()
sage: G += plot(1+1/(x-1), x, 3, 20, legend_label='Optimal algo:$1+1/(d-1)$', linestyle='dashed', color='blue', thickness=3)
sage: G += line([(3,1), (20,1)], color='black', legend_label='Strong convergence threshold', linestyle='dotted', thickness=2)
sage: G += line(brun_points, legend_label='Brun', color='cyan', thickness=3)
sage: G += line(selmer_points, legend_label='Selmer', color='green', thickness=3)
sage: G += line(arp_points, legend_label='ARP', color='red', thickness=3)
sage: G.ymin(0)
sage: G.axes_labels(['$d$',''])
sage: G.show(title='Computation of first 2 Lyapunov Exponents: comparison of the value $1-\\theta_2/\\theta_1$\n for $d$-dimensional MCF algorithms Brun, Selmer and ARP for $3\\leq d\\leq 20$')
When choosing a version number for a package, a good source to read is Semantic Versioning 2.0.0, which I discovered in the talk How to make package managers cry given by Kenneth Hoste at FOSDEM 2018 and mentioned on sage-devel. Python's PEP 440 is also worth reading.
The summary of Semantic Versioning reads:
Given a version number MAJOR.MINOR.PATCH, increment the: MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards-compatible manner, and PATCH version when you make backwards-compatible bug fixes. Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
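This rule is mechanical enough to sketch in a few lines of Python (a hypothetical `bump` helper for illustration, not part of any packaging tool):

```python
def bump(version, part):
    """Increment the MAJOR, MINOR or PATCH part of a MAJOR.MINOR.PATCH version.

    Bumping MINOR resets PATCH to 0; bumping MAJOR resets both.
    (Hypothetical helper; pre-release and build metadata are not handled.)
    """
    major, minor, patch = (int(n) for n in version.split('.'))
    if part == 'major':
        return f'{major + 1}.0.0'
    elif part == 'minor':
        return f'{major}.{minor + 1}.0'
    elif part == 'patch':
        return f'{major}.{minor}.{patch + 1}'
    raise ValueError(part)

# incompatible API change vs. backwards-compatible addition vs. bug fix
assert bump('1.4.2', 'major') == '2.0.0'
assert bump('1.4.2', 'minor') == '1.5.0'
assert bump('1.4.2', 'patch') == '1.4.3'
```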
Last week, Jeroen Demeyer gave a talk at the Atelier PARI/GP 2019 about cypari2.
Jeroen's talk consisted of HTML slides in which the computations are executed live (with Jupyter) and can be edited live within the slides. Impressive! All of this thanks to the Python package RISE.
To install and use RISE, a Jupyter Notebook extension for editable presentations, installing the package is not quite enough: the CSS files must also be copied to the right place. To install it in Sage, it suffices to run:
sage -pip install rise
sage -sh
jupyter-nbextension install rise --py --sys-prefix
Afterwards, one can watch this demo on YouTube; the documentation of RISE is here.
During the last year, I wrote a Python module for Wang tiles, containing about 4K lines of code including doctests and documentation.
It can be installed like this:
sage -pip install slabbe
It can be used like this:
sage: from slabbe import WangTileSet
sage: tiles = [(2,4,2,1), (2,2,2,0), (1,1,3,1), (1,2,3,2), (3,1,3,3),
....:          (0,1,3,1), (0,0,0,1), (3,1,0,2), (0,2,1,2), (1,2,1,4), (3,3,1,2)]
sage: T0 = WangTileSet([tuple(str(a) for a in t) for t in tiles])
sage: T0.tikz(ncolumns=11).pdf()
The module on Wang tiles contains a class WangTileSolver which implements three reductions of the Wang tiling problem: the first using MILP solvers, the second using SAT solvers and the third using Knuth's dancing links.
Here is one example of a tiling found using the dancing links reduction:
sage: %time tiling = T0.solver(10,10).solve(solver='dancing_links')
CPU times: user 36 ms, sys: 12 ms, total: 48 ms
Wall time: 65.5 ms
sage: tiling.tikz().pdf()
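For intuition, the constraint being reduced here is simple to state: with the convention that a Wang tile is a quadruple (right, top, left, bottom), horizontally adjacent tiles must agree on the shared vertical edge and vertically adjacent tiles on the shared horizontal edge. A naive plain-Python backtracking sketch (independent of slabbe, and far slower than the solvers above) can still find small tilings with the 11 Jeandel-Rao tiles:

```python
# Jeandel-Rao tiles as (right, top, left, bottom) quadruples
tiles = [(2,4,2,1), (2,2,2,0), (1,1,3,1), (1,2,3,2), (3,1,3,3),
         (0,1,3,1), (0,0,0,1), (3,1,0,2), (0,2,1,2), (1,2,1,4), (3,3,1,2)]

def solve(width, height):
    """Return a dict (x, y) -> tile index forming a valid tiling, or None."""
    table = {}
    def backtrack(pos):
        if pos == width * height:
            return True
        x, y = divmod(pos, height)   # fill column by column, bottom to top
        for i, (right, top, left, bottom) in enumerate(tiles):
            # the left neighbor's right color must equal our left color
            if x > 0 and tiles[table[(x-1, y)]][0] != left:
                continue
            # the bottom neighbor's top color must equal our bottom color
            if y > 0 and tiles[table[(x, y-1)]][1] != bottom:
                continue
            table[(x, y)] = i
            if backtrack(pos + 1):
                return True
            del table[(x, y)]
        return False
    return table if backtrack(0) else None

tiling = solve(3, 3)
assert tiling is not None
```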
All these reductions now allow me to compare the efficiency of various types of solvers on Wang tiling problems. Here is the list of solvers that I often use.
| Solver | Description |
|---|---|
| 'Gurobi' | MILP solver |
| 'GLPK' | MILP solver |
| 'PPL' | MILP solver |
| 'LP' | a SAT solver using a reduction to LP |
| 'cryptominisat' | SAT solver |
| 'picosat' | SAT solver |
| 'glucose' | SAT solver |
| 'dancing_links' | Knuth's algorithm |
In this recent work on the substitutive structure of Jeandel-Rao tilings, I introduced various Wang tile sets \(T_i\) for \(i\in\{0,1,\dots,12\}\). In this blog post, we will concentrate on the set \(T_0\) of 11 Wang tiles introduced by Jeandel and Rao, as well as \(T_2\) containing 20 tiles and \(T_3\) containing 24 tiles.
Tiling an n x n square
The most natural question to ask is to find valid Wang tilings of an \(n\times n\) square with given Wang tiles. Below is the time spent by each of the mentioned solvers to find a valid tiling of an \(n\times n\) square in less than 10 seconds, for each of the three Wang tile sets \(T_0\), \(T_2\) and \(T_3\).
We remark that MILP solvers are slower. Dancing links can solve 20x20 squares with Jeandel-Rao tiles \(T_0\), and SAT solvers perform very well, Glucose being the best: it can find a 55x55 tiling with Jeandel-Rao tiles \(T_0\) in less than 10 seconds.
Finding all dominoes allowing a surrounding of given radius
One thing that is often needed in my research is to enumerate all horizontal and vertical dominoes that allow a given surrounding radius. This is a difficult question in general, as deciding whether a given tile set admits a tiling of the infinite plane is undecidable. But in some cases, the information we get from the dominoes admitting a surrounding of radius 1, 2, 3 or 4 is enough to conclude, for instance, that the tiling can be desubstituted. This is why we need to answer this question as fast as possible.
Below is the comparison of the time taken by each solver to compute all vertical and horizontal dominoes allowing a surrounding of radius 1, 2 and 3 (in less than 1000 seconds for each execution).
What is surprising at first is that the solvers that performed well in the first \(n\times n\) square experiment are not the best in the second experiment computing valid dominoes. Dancing links and the MILP solver Gurobi are now the best algorithms to compute all dominoes. They are followed by picosat and cryptominisat, and then glucose.
The source code of the above comparisons
The source code of the above comparison can be found in this Jupyter notebook. Note that it depends on the use of Glucose as a Sage optional package (#26361) and on the most recent development version of the slabbe optional Sage package.
I have been working on Jeandel-Rao tiles lately.
Before the conference Model Sets and Aperiodic Order held in Durham, UK (Sep 3-7 2018), I thought it would be a good idea to bring some real tiles to the conference. So I first decided on some conventions to represent the above tiles as topologically closed disks, basically using the representation of integers in base 1:
With these shapes, I created a 33 x 19 patch. With 3cm on each side, the patch takes 99cm x 57cm just within the capacity of the laser cut machine (1m x 60 cm):
With the help of David Renault from LaBRI, we went to Coh@bit, the FabLab of Bordeaux University, and we laser-cut two 3mm-thick plywood sheets for a total of 1282 Wang tiles. This is the result:
One may recreate the 33 x 19 tiling as follows (note that I am using Cartesian-like coordinates, so the first list data[0] is actually the first column, from bottom to top):
sage: data = [[10, 4, 6, 1, 3, 3, 7, 0, 9, 7, 2, 6, 1, 3, 8, 7, 0, 9, 7],
....:     [4, 5, 6, 1, 8, 10, 4, 0, 9, 3, 8, 7, 0, 9, 7, 5, 0, 9, 3],
....:     [3, 7, 6, 1, 7, 2, 5, 0, 9, 8, 7, 5, 0, 9, 3, 7, 0, 9, 10],
....:     [10, 4, 6, 1, 3, 8, 7, 0, 9, 7, 5, 6, 1, 8, 10, 4, 0, 9, 3],
....:     [2, 5, 6, 1, 8, 7, 5, 0, 9, 3, 7, 6, 1, 7, 2, 5, 0, 9, 8],
....:     [8, 7, 6, 1, 7, 5, 6, 1, 8, 10, 4, 6, 1, 3, 8, 7, 0, 9, 7],
....:     [7, 5, 6, 1, 3, 7, 6, 1, 7, 2, 5, 6, 1, 8, 7, 5, 0, 9, 3],
....:     [3, 7, 6, 1, 10, 4, 6, 1, 3, 8, 7, 6, 1, 7, 5, 6, 1, 8, 10],
....:     [10, 4, 6, 1, 3, 3, 7, 0, 9, 7, 5, 6, 1, 3, 7, 6, 1, 7, 2],
....:     [2, 5, 6, 1, 8, 10, 4, 0, 9, 3, 7, 6, 1, 10, 4, 6, 1, 3, 8],
....:     [8, 7, 6, 1, 7, 5, 5, 0, 9, 10, 4, 6, 1, 3, 3, 7, 0, 9, 7],
....:     [7, 5, 6, 1, 3, 7, 6, 1, 10, 4, 5, 6, 1, 8, 10, 4, 0, 9, 3],
....:     [3, 7, 6, 1, 10, 4, 6, 1, 3, 3, 7, 6, 1, 7, 2, 5, 0, 9, 8],
....:     [10, 4, 6, 1, 3, 3, 7, 0, 9, 10, 4, 6, 1, 3, 8, 7, 0, 9, 7],
....:     [4, 5, 6, 1, 8, 10, 4, 0, 9, 3, 3, 7, 0, 9, 7, 5, 0, 9, 3],
....:     [3, 7, 6, 1, 7, 2, 5, 0, 9, 8, 10, 4, 0, 9, 3, 7, 0, 9, 10],
....:     [10, 4, 6, 1, 3, 8, 7, 0, 9, 7, 5, 5, 0, 9, 10, 4, 0, 9, 3],
....:     [2, 5, 6, 1, 8, 7, 5, 0, 9, 3, 7, 6, 1, 10, 4, 5, 0, 9, 8],
....:     [8, 7, 6, 1, 7, 5, 6, 1, 8, 10, 4, 6, 1, 3, 3, 7, 0, 9, 7],
....:     [7, 5, 6, 1, 3, 7, 6, 1, 7, 2, 5, 6, 1, 8, 10, 4, 0, 9, 3],
....:     [3, 7, 6, 1, 10, 4, 6, 1, 3, 8, 7, 6, 1, 7, 2, 5, 0, 9, 8],
....:     [10, 4, 6, 1, 3, 3, 7, 0, 9, 7, 2, 6, 1, 3, 8, 7, 0, 9, 7],
....:     [4, 5, 6, 1, 8, 10, 4, 0, 9, 3, 8, 7, 0, 9, 7, 5, 0, 9, 3],
....:     [3, 7, 6, 1, 7, 2, 5, 0, 9, 8, 7, 5, 0, 9, 3, 7, 0, 9, 10],
....:     [10, 4, 6, 1, 3, 8, 7, 0, 9, 7, 5, 6, 1, 8, 10, 4, 0, 9, 3],
....:     [3, 3, 7, 0, 9, 7, 5, 0, 9, 3, 7, 6, 1, 7, 2, 5, 0, 9, 8],
....:     [8, 10, 4, 0, 9, 3, 7, 0, 9, 10, 4, 6, 1, 3, 8, 7, 0, 9, 7],
....:     [7, 5, 5, 0, 9, 10, 4, 0, 9, 3, 3, 7, 0, 9, 7, 5, 0, 9, 3],
....:     [3, 7, 6, 1, 10, 4, 5, 0, 9, 8, 10, 4, 0, 9, 3, 7, 0, 9, 10],
....:     [10, 4, 6, 1, 3, 3, 7, 0, 9, 7, 5, 5, 0, 9, 10, 4, 0, 9, 3],
....:     [2, 5, 6, 1, 8, 10, 4, 0, 9, 3, 7, 6, 1, 10, 4, 5, 0, 9, 8],
....:     [8, 7, 6, 1, 7, 5, 5, 0, 9, 10, 4, 6, 1, 3, 3, 7, 0, 9, 7],
....:     [7, 5, 6, 1, 3, 7, 6, 1, 10, 4, 5, 6, 1, 8, 10, 4, 0, 9, 3]]
The above patch has been chosen among 1000 randomly generated ones as the closest to the asymptotic frequencies of the tiles in Jeandel-Rao tilings (or at least in the minimal subshift that I describe in the preprint):
sage: from collections import Counter
sage: c = Counter(flatten(data))
sage: tile_count = [c[i] for i in range(11)]
The asymptotic frequencies:
sage: phi = golden_ratio.n()
sage: Linv = [2*phi + 6, 2*phi + 6, 18*phi + 10, 2*phi + 6, 8*phi + 2,
....:         5*phi + 4, 2*phi + 6, 12/5*phi + 14/5, 8*phi + 2,
....:         2*phi + 6, 8*phi + 2]
sage: perfect_proportions = vector([1/a for a in Linv])
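As a quick sanity check, these eleven frequencies form a probability vector: their sum is exactly 1, which can be verified in plain Python without Sage:

```python
# inverse frequencies expressed as a + b*phi, with phi the golden ratio
phi = (1 + 5**0.5) / 2
Linv = [2*phi + 6, 2*phi + 6, 18*phi + 10, 2*phi + 6, 8*phi + 2,
        5*phi + 4, 2*phi + 6, 12/5*phi + 14/5, 8*phi + 2,
        2*phi + 6, 8*phi + 2]
frequencies = [1/a for a in Linv]
# the frequencies of the 11 tiles sum to 1 (up to floating-point error)
assert abs(sum(frequencies) - 1) < 1e-12
```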
Comparison of the number of tiles of each type with the expected frequency:
sage: header_row = ['tile id', 'Asymptotic frequency', 'Expected nb of copies',
....:     'Nb copies in the 33x19 patch']
sage: columns = [range(11), perfect_proportions,
....:     vector(perfect_proportions)*33*19, tile_count]
sage: table(columns=columns, header_row=header_row)
  tile id   Asymptotic frequency   Expected nb of copies   Nb copies in the 33x19 patch
+---------+----------------------+-----------------------+------------------------------+
  0         0.108271182329550      67.8860313206280        67
  1         0.108271182329550      67.8860313206280        65
  2         0.0255593590340479     16.0257181143480        16
  3         0.108271182329550      67.8860313206280        71
  4         0.0669152706817991     41.9558747174880        42
  5         0.0827118232955023     51.8603132062800        51
  6         0.108271182329550      67.8860313206280        65
  7         0.149627093977301      93.8161879237680        95
  8         0.0669152706817991     41.9558747174880        44
  9         0.108271182329550      67.8860313206280        67
  10        0.0669152706817991     41.9558747174880        44
I brought the \(33\times19=627\) tiles to the conference and offered to the first 7 persons finding a \(7\times 7\) tiling the opportunity to keep the 49 tiles they used. 49 is a good number since the frequency of the rarest tile (with id 2) is about 2%, which allows having at least one copy of each tile in a subset of 49 tiles admitting a solution.
A natural question to ask is how many such \(7\times 7\) tilings exist. With ticket #25125 that was merged in Sage 8.3 this Spring, it is possible to enumerate and count solutions in parallel with Knuth's dancing links algorithm. After the installation of the slabbe optional Sage package (sage -pip install slabbe), one may compute that there are 152244 solutions.
sage: from slabbe import WangTileSet
sage: tiles = [(2,4,2,1), (2,2,2,0), (1,1,3,1), (1,2,3,2), (3,1,3,3),
....:          (0,1,3,1), (0,0,0,1), (3,1,0,2), (0,2,1,2), (1,2,1,4), (3,3,1,2)]
sage: T0 = WangTileSet(tiles)
sage: T0_solver = T0.solver(7,7)
sage: %time T0_solver.number_of_solutions(ncpus=8)
CPU times: user 16 ms, sys: 82.3 ms, total: 98.3 ms
Wall time: 388 ms
152244
One may also get the list of all solutions and print one of them:
sage: %time L = T0_solver.all_solutions(); print(len(L))
152244
CPU times: user 6.46 s, sys: 344 ms, total: 6.8 s
Wall time: 6.82 s
sage: L[0]
A wang tiling of a 7 x 7 rectangle
sage: L[0].table()   # warning: the output is in Cartesian-like coordinates
[[1, 8, 10, 4, 5, 0, 9],
 [1, 7, 2, 5, 6, 1, 8],
 [1, 3, 8, 7, 6, 1, 7],
 [0, 9, 7, 5, 6, 1, 3],
 [0, 9, 3, 7, 6, 1, 8],
 [1, 8, 10, 4, 6, 1, 7],
 [1, 7, 2, 2, 6, 1, 3]]
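One can double-check that solution in a few lines of plain Python: with the convention that a tile is (right, top, left, bottom) and that table[x][y] is the tile in column x at height y, every shared edge must carry the same color on both sides.

```python
# Jeandel-Rao tiles as (right, top, left, bottom) quadruples
tiles = [(2,4,2,1), (2,2,2,0), (1,1,3,1), (1,2,3,2), (3,1,3,3),
         (0,1,3,1), (0,0,0,1), (3,1,0,2), (0,2,1,2), (1,2,1,4), (3,3,1,2)]

# the 7 x 7 solution printed above, in Cartesian-like coordinates
table = [[1, 8, 10, 4, 5, 0, 9], [1, 7, 2, 5, 6, 1, 8],
         [1, 3, 8, 7, 6, 1, 7], [0, 9, 7, 5, 6, 1, 3],
         [0, 9, 3, 7, 6, 1, 8], [1, 8, 10, 4, 6, 1, 7],
         [1, 7, 2, 2, 6, 1, 3]]

def is_valid(table, tiles):
    """Check the Wang condition on every horizontal and vertical edge."""
    width, height = len(table), len(table[0])
    for x in range(width):
        for y in range(height):
            right, top, _, _ = tiles[table[x][y]]
            if x + 1 < width and right != tiles[table[x+1][y]][2]:
                return False
            if y + 1 < height and top != tiles[table[x][y+1]][3]:
                return False
    return True

assert is_valid(table, tiles)
```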
This is the number of distinct sets of 49 tiles which admit a 7x7 solution:
sage: from collections import Counter
sage: def count_tiles(tiling):
....:     C = Counter(flatten(tiling.table()))
....:     return tuple(C.get(a,0) for a in range(11))
sage: Lfreq = map(count_tiles, L)
sage: Lfreq_count = Counter(Lfreq)
sage: len(Lfreq_count)
83258
Distribution of the number of solutions sharing the same set of 49 tiles:
sage: Counter(Lfreq_count.values())
Counter({1: 49076, 2: 19849, 3: 6313, 4: 3664, 6: 1410, 5: 1341,
         7: 705, 8: 293, 9: 159, 14: 116, 10: 104, 12: 97, 18: 44,
         11: 26, 15: 24, 13: 10, 17: 8, 22: 6, 32: 6, 16: 3, 28: 2,
         19: 1, 21: 1})
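As a consistency check, this distribution accounts for everything computed above: the number of distinct 49-tile multisets sums to 83258, and weighting each entry by its number of solutions recovers all 152244 solutions.

```python
# distribution: (number of solutions per 49-tile multiset) -> (number of multisets)
distribution = {1: 49076, 2: 19849, 3: 6313, 4: 3664, 6: 1410, 5: 1341,
                7: 705, 8: 293, 9: 159, 14: 116, 10: 104, 12: 97, 18: 44,
                11: 26, 15: 24, 13: 10, 17: 8, 22: 6, 32: 6, 16: 3, 28: 2,
                19: 1, 21: 1}
assert sum(distribution.values()) == 83258                    # distinct multisets
assert sum(k * v for k, v in distribution.items()) == 152244  # all 7x7 solutions
```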
How the number of \(k\times k\) solutions grows for \(k\) from 0 to 9:
sage: [T0.solver(k,k).number_of_solutions() for k in range(10)]
[0, 11, 85, 444, 1723, 9172, 50638, 152244, 262019, 1641695]
Unfortunately, most of those \(k\times k\) solutions are not extendable to a tiling of the whole plane. Indeed, the number of \(k\times k\) patches in the language of the minimal aperiodic subshift that I am able to describe, which is a proper subset of the Jeandel-Rao tilings, seems, according to some heuristics, to be something like:
[1, 11, 49, 108, 184, 268, 367, 483]
I do not share my (ugly) code for this computation yet; I will rather share clean code when the time comes. So among the 152244 solutions of size \(7\times 7\), only about 483 (0.32%) of them extend to a uniformly recurrent tiling of the plane.
I tried to install Ubuntu 18.04 on an Acer Aspire ES 11 ES1-132-C6LG and ran into trouble: after the installation I got the screen "No Bootable Device found". The first help I found on Google is this post on itsfoss.com, but at step 3 the option "Select an UEFI file as trusted for executing" is unavailable, so the proposed solution does not work for me.
Some comments on the same post show that I am not alone with this problem:
Alex (2 months ago): I cannot "Select an UEFI file"... Maybe Windows 10 configured my BIOS to reject any other drive (SSD) that has not Windows on it?
Tom (1 month ago): No, you need to install ubuntu without bootloader, mount the partitions after the installation, modprobe efivars and then install grub-efi
After looking up what grub-efi and modprobe mean, I found this discussion (April 2017, Ubuntu 17.04) and this discussion (Dec. 2016) on askubuntu, and this discussion (Dec. 2016) on the Acer community forum, all very close to my problem. In the end, the simplest one (no need to modprobe anything or install grub-efi) and the most useful (it worked) was this discussion (Jan. 2017) on the Acer community forum, especially the two posts by Spektro37 (the first on page 1, the second on page 2), which I copy and adapt a little below.
Install Ubuntu;
Boot using Ubuntu USB and select "Try without installing";
Launch Gparted to get the EFI partition address. In my case it's /dev/mmcblk0p1;
Open the Terminal (ctrl + alt + T);
In the Terminal execute the following to create a directory in the media folder and to mount the EFI partition to that folder:
sudo mkdir /media/EFI
sudo mount {replace_this_with_the_address_from_step_3} /media/EFI
In my case it would look like:
sudo mkdir /media/EFI
sudo mount /dev/mmcblk0p1 /media/EFI
Create the /EFI/Linux/ folder:
sudo mkdir /media/EFI/EFI/Linux
Copy all the existing files from the folder that was created during the Ubuntu installation. In my case, the default folder was /EFI/ubuntu/:
sudo ls /media/EFI/EFI/ubuntu
BOOTX64.CSV  fw  fwupx64.efi  grub.cfg  grubx64.efi  mmx64.efi  shimx64.efi
To do so, you can use the following command:
sudo cp -R /media/EFI/EFI/ubuntu/* /media/EFI/EFI/Linux/
Copy the BOOTX64.EFI from the BOOT directory to the Linux folder (without it, I confirm it didn't work):
sudo cp /media/EFI/EFI/BOOT/BOOTX64.EFI /media/EFI/EFI/Linux
According to Spektro37, /EFI/Linux/BOOTX64.efi is a hardcoded path for Linux.
Six years after using WebWork and starting its French translation in a course I gave at the Université du Québec à Montréal, I just saw on the internet that WebWork is now used on a large scale in French in Quebec.
The project Développement de WeBWorK pour le réseau des cégeps francophones has borne fruit. A server of the Centre collégial de développement de matériel didactique (CCDMD) gives easy access to an account on the WeBWorK platform. According to the site, more than 3000 questions are now translated into French in the Banque de problèmes libres (BPL), available in a repository on github. Moreover, online resources are available, notably on this topic on the MathemaTIC website.
Congratulations to everyone who has been involved in carrying the project forward over the last 6 years.
If you pass through Lac-Mégantic this week, make a stop at Mange ta main: they serve the Labbés' apple purée :)
As the 2017 American Ultimate Disc League (AUDL) Championship starts in Montreal today, I thought it would be good timing to show an aspect that makes Montreal (and more generally Quebec and Canada) so exceptional concerning Ultimate Frisbee.
Remark: all of the data and computations made for this post are available as a Jupyter Notebook in a github repository.
The World Health Organization website has a lot of data on the density of physicians (counted as the total number per 1000 population). Some countries have a lot of physicians: as of today, Cuba has 7.519 physicians per 1000 population, while Canada has 2.477, the USA has 2.554 and France has 3.227. Other countries have few, like Mali, which I visited in 2006 and which has only 0.085 physicians per 1000 population.
I now propose to do the same for Ultimate Frisbee players as the WHO does for physicians. For this we may use the data made public by the WFDF in 2014 before deciding how many bids were given to each country to compete in the World Club Ultimate Championships that took place in 2014 in Lecco. And we may get the population by country in 2014 from the World Bank website.
A lot of countries have very few ultimate players. For example, only 13 countries have at least 1000 players: United States, Canada, Australia, Germany, Japan, United Kingdom, France, Austria, Colombia, Norway, Netherlands, Philippines, Belgium. And only 22 countries have at least 500 players.
Above all, it is the density of people playing ultimate that makes Canada so exceptional. The next graphic, including countries with at least 500 players declared to WFDF in 2014, says it all:
For every 1000 population, there is an ultimate player. Since everybody knows something like 1000 people, this means that in Canada everybody knows someone who plays ultimate. This makes a big difference in the country in terms of recognition. In Canada, people no longer ask whether you play with a dog on the beach when you say you play ultimate. They know this sport! In comparison, in France there is 1 ultimate player for every 24664 population. You have to be lucky to know one.
So many people play ultimate in Canada that there are almost as many players as in the United States:
The above graphic explains something more: it is much, much more difficult to represent your country at WFDF competitions when you come from the USA or Canada.
Note that these computations were done based on the number of active ultimate frisbee players declared to WFDF. It is possible that some countries do not declare all of them for various reasons. For example, USA Ultimate declares to WFDF only its own members, who are in general people taking part in USA Ultimate competitions.
Canada is among the most egalitarian countries in terms of women playing ultimate. For every 7 ultimate players on the line in Canada, 2.800 are female. Among countries with at least 500 declared active players in 2014, only the Philippines (2.996) and New Zealand (2.827) are more egalitarian.
In other countries, like France where only 1.771 females play ultimate for every 7 players, it is just impossible to get your club to play in the mixed division. This is also why we find many more women playing in the open division in Europe than in America, up to a certain level of play.
The density of women playing ultimate in Canada is just exceptional:
In absolute value, this makes Canada the second country with the most female active players in 2014, just behind USA, with a lot of advance over the next countries:
In 2017, according to this page of the Quebec Ultimate Federation, there are 6908 registered ultimate players in the Quebec province of Canada. With a population of 8.18 million, this makes a density of 0.844 ultimate players per 1000 population in the Quebec province.
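The density figure is direct arithmetic, players divided by population, scaled to a rate per 1000:

```python
players = 6908        # registered ultimate players in Quebec in 2017
population = 8.18e6   # population of the Quebec province
density_per_1000 = players / population * 1000
# about 0.844 ultimate players per 1000 population
assert abs(density_per_1000 - 0.844) < 0.001
```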
Most of the people playing ultimate are in the big cities (Montréal and Québec City). There are 3454 active ultimate frisbee players in Montreal. If Montreal were a country, it would rank 7th for the total number of active players in the 2014 WFDF list, just behind Germany (3632), Japan (3621) and the United Kingdom (3621).
Note that in the above graphic, Capitale-Nationale is the name of the region of Quebec City and Estrie is the name of the region of Sherbrooke. Let me recall that Quebec City hosts every year the Mars Attaque tournament, which is without doubt the largest indoor ultimate tournament in the world, with 120 teams gathering more than 1000 players (see this video of the most recent edition). Also, team ONYX, which finished 2nd in the mixed division at the Prague 2010 World Club Ultimate Championships, was from Quebec City.
This gives even bigger densities if we look at densities per region. In the Montreal administrative region, there are 1.727 ultimate frisbee players per 1000 population. Two other regions are above 2.000, including the region of Quebec City.
The data for the population in each Quebec region was taken from the associated wikipedia page about Quebec administrative regions.
Note that there is a difficulty in counting the density for regions that are near a big city (Longueuil and Laval are near Montreal, Chaudière-Appalaches is a region south of Quebec City, and Outaouais is the Quebec region next to Ottawa). The low density in these regions might be explained by the fact that people drive to the big city to play ultimate.
There are a lot of ultimate frisbee players in Ontario (including the cities of Ottawa and Toronto) and in British Columbia. Here is the number of ultimate frisbee players per province in each of the years 2014, 2015 and 2016, according to the 2016 Ultimate Canada Annual Report, which was given to me by Danny Saunders (executive director of Ultimate Canada). I do not know if this annual report can be found online.
Using data on the population of Canada's provinces and territories found on this Wikipedia page, we can compute the density of ultimate frisbee players in each province. We discover that, while there is a high density of ultimate frisbee players in Quebec, it is even higher in other provinces.
The province of Manitoba has the highest rate, with more than 3.5 players per 1000 population.
Below I am adding a graphic on the density of people playing ultimate in some cities I know. Among this short list, cities in Quebec have a much higher density of ultimate frisbee players than cities in Europe.
I am missing data to do more of these graphics. How many frisbee players are there in every state of the USA? In every province of Canada? In every city of the world? How many are female? If you can help me gather this data, do not hesitate to contact me or add a comment below.
Compiling Sage takes a while and does a lot of stuff. Each time, I wonder which components take so much time and which ones are fast. I wrote a module in version 0.3b2 of my slabbe package, available on PyPI, to figure this out.
This is after compiling 7.5.beta6 after an upgrade from 7.5.beta4:
sage: from slabbe.analyze_sage_build import draw_sage_build
sage: draw_sage_build().pdf()
From scratch, from a fresh git clone of 7.5.beta6, after running MAKE='make -j4' make ptestlong, I get:
sage: from slabbe.analyze_sage_build import draw_sage_build
sage: draw_sage_build().pdf()
The picture does not include the start and ptestlong because there was an error compiling the documentation.
By default, draw_sage_build considers all of the log files in logs/pkgs, but options are available to consider only log files created in a given interval of time. See draw_sage_build? for more info.
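The underlying idea is simple: each file in logs/pkgs records one package build, and its filesystem timestamps give a rough estimate of when the build ended and how long it took. A minimal sketch of that idea in plain Python (a hypothetical helper, not the actual implementation of draw_sage_build):

```python
import os
from glob import glob

def build_intervals(logdir):
    """Return (package, start, end) triples sorted by end time.

    The interval is estimated from each log file's timestamps; st_ctime is
    only an approximation of the build start time, so this is a rough sketch.
    """
    intervals = []
    for path in glob(os.path.join(logdir, '*.log')):
        st = os.stat(path)
        name = os.path.splitext(os.path.basename(path))[0]
        intervals.append((name, min(st.st_ctime, st.st_mtime), st.st_mtime))
    return sorted(intervals, key=lambda t: t[2])
```

Each triple can then be drawn as one horizontal bar on a timeline, which is essentially what the pictures above show.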