text = [md`
Self-Organizing Maps (SOMs) are a machine-learning technique introduced in the early 1980s by the Finnish professor Teuvo Kohonen.
A *map* is made of an arbitrary number of cells (aka units), usually arranged in a square or hexagonal grid. You can see an example of such a grid to the right.
This is a grid with ${rows} rows and ${columns} columns,
for a total of ${size} cells.
You can use the following slider to change its size.
${columns_slider}
`,md`
The color of each cell depends on its *weights*. Each cell "activates" in response to an *input signal*, according to its *weights*.
For example, let's take a random ${html`<span style="color:${input_color};text-shadow: -1px 0 black, 0 1px black, 1px 0 black, 0 -1px black;">color</span>`} as input.
You can see how the grid activates in response to an input by enabling the scaling by similarity:
${viewof scale_by_similarity_toggle_button}
As you can see, the cells whose weight color is closer to the input activate more.
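
For the curious, here is a rough sketch in plain JavaScript (illustrative code, not the notebook's actual implementation) of how such an activation could be computed, assuming each weight and each input is an RGB triple:

~~~js
// Hypothetical sketch: weights and inputs are RGB triples in [0, 255].
function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Map the color distance to an activation in [0, 1]: identical colors give 1,
// maximally different colors give 0.
function activation(cellWeights, input) {
  const maxDistance = euclideanDistance([0, 0, 0], [255, 255, 255]);
  return 1 - euclideanDistance(cellWeights, input) / maxDistance;
}

activation([200, 30, 30], [255, 0, 0]); // high: both are reddish
activation([10, 10, 200], [255, 0, 0]); // low: blue vs. red
~~~
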
Feel free to try out different colors:
${viewof color_picker_button}
${viewof W_button}
`,md`
In particular, we are interested in finding the cell that activates the most in response to the ${html`<span style="color:${input_color};text-shadow: -1px 0 black, 0 1px black, 1px 0 black, 0 -1px black;">current input</span>`}.
Found it?
No worries, you can just press this button:
${viewof highlight_bmu_toggle_button}
Once we have found the best matching unit (BMU) for an input, we can compute its distance from any other cell.
${viewof show_distances_toggle_button}
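
A sketch of how the BMU search and the cell-to-cell distance could look (again purely illustrative, reusing the euclideanDistance helper sketched above and assuming the cells are stored row by row):

~~~js
// Hypothetical sketch: weights is an array of RGB triples, one per cell.
function findBMU(weights, input) {
  let best = 0;
  let bestDistance = Infinity;
  weights.forEach((w, i) => {
    const d = euclideanDistance(w, input);
    if (d < bestDistance) { bestDistance = d; best = i; }
  });
  return best; // index of the cell whose weights are closest to the input
}

// Distance between two cells measured on the grid, not in color space.
function gridDistance(i, j, columns) {
  const rowI = Math.floor(i / columns), colI = i % columns;
  const rowJ = Math.floor(j / columns), colJ = j % columns;
  return Math.hypot(rowI - rowJ, colI - colJ);
}
~~~
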
`,md`
Every time we feed a new input to the map, the weights of the cells within a certain *range* of the BMU are updated.
${viewof show_range_toggle_button}
By changing the range you can see which neighbors of the BMU are included, based on their distance from it.
${range_slider}
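
In code, selecting the neighborhood could be as simple as keeping every cell whose grid distance from the BMU is within the chosen range (a sketch reusing the hypothetical gridDistance helper above):

~~~js
// Hypothetical sketch: indices of all cells within a given range of the BMU.
function neighborhood(bmuIndex, range, totalCells, columns) {
  const selected = [];
  for (let i = 0; i < totalCells; i++) {
    if (gridDistance(bmuIndex, i, columns) <= range) selected.push(i);
  }
  return selected; // always includes the BMU itself (distance 0)
}
~~~
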
`,md`
All the selected cells are now going to be changed to be "closer" to the new input.
You can use the following button to switch between the current values and the values after the update.
${viewof show_updated_weights_toggle_button}
The "strength" of this update can be controlled by the learning rate.
${learning_rate_slider}
To achieve a better result, the learning rate is usually adjusted dynamically during the self-organization process.
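
As a rough sketch, the update could look like this (hypothetical helper names; many SOM variants also scale the update by the distance from the BMU, which is left out here for simplicity):

~~~js
// Hypothetical sketch: move each selected cell's weights toward the input.
// A learning rate of 0 leaves the weights untouched, 1 copies the input exactly.
function updateWeights(weights, selected, input, learningRate) {
  for (const i of selected) {
    weights[i] = weights[i].map((w, c) => w + learningRate * (input[c] - w));
  }
}

// One common way to adjust the learning rate dynamically is an exponential
// decay over the number of steps performed so far, for example:
// learningRate = initialLearningRate * Math.exp(-step / timeConstant);
~~~
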
`,md`
If you press this button, the update will be applied and a new input will be generated.
To save some time, each click performs 10 of these steps.
${viewof next_10_steps_button}
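
Putting the previous sketches together, a single step could look roughly like this (still hypothetical code, drawing a random color as the input of each step):

~~~js
// Hypothetical sketch of one training step, reusing the helpers sketched above.
function step(weights, columns, range, learningRate) {
  const input = [0, 0, 0].map(() => Math.floor(Math.random() * 256));
  const bmuIndex = findBMU(weights, input);
  const selected = neighborhood(bmuIndex, range, weights.length, columns);
  updateWeights(weights, selected, input, learningRate);
}

// One click of the button above then roughly amounts to:
// for (let i = 0; i < 10; i++) step(weights, columns, range, learningRate);
~~~
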
Now try to get the map to become ordered by adjusting the learning rate and the update range.
The goal is to have a smooth color transition while not losing any RGB color completely, something like a generic color picker.
If you want to start over with the same starting weights, you can press the reset button below.
${viewof weights_button}
`,md`
**TIP**: to get a better view of the current state of the map, you may want to hide all the visual elements/modifications we introduced while explaining how the learning algorithm works. You can easily do that by going back to the previous pages and toggling the respective buttons (e.g. BMU highlight, by-similarity scaling).
`,md`
You may have realised that to achieve the best result we want to **start with a low learning rate and a high range**, and then **gradually change to a shorter range and a higher learning rate**.
This is just a very basic introduction to an old, yet still quite popular, dimensionality reduction/data visualization technique. Many implementation details and other variants of this algorithm can be found online and in the literature.
My hope is for this playground to be a quick way to get an idea of the principle of self-organization. I hope you have learned something interesting.
Thank you for sticking around!
`,md`
Fin.
`]