The key here is to understand that the segmentation problem has been translated into a pixel classification problem. In other words, the plugin assigns a class label to each pixel of the input image, so all pixels with the same label will be part of the same “category”. It is the user's task to decide which classes to use depending on their specific problem. For instance, if you want to separate some cells from the background, you would select two classes: one for the cell pixels and one for the background pixels.
The second task of the user is to provide some samples (what I called “traces” before, since they are usually introduced as freehand ROIs) of pixels of each class. Following my previous example, you would select some cell pixels and add them to the cell class, and some background pixels and add them to the background class. Those are what we call the “training samples”, because the plugin will then train a classifier to best differentiate those types of pixels. Once the classifier is trained, it is applied to all pixels in the image to produce the final label image.
Each pixel is represented not only by its intensity but by a set of image features that contain information about texture, borders, color, hue, etc. The image features to use are selected by clicking on the Settings button.
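To make the idea concrete, here is a tiny self-contained sketch of pixel classification, under simplifying assumptions: this is not the plugin's actual code (the plugin uses Weka classifiers and a much richer feature set), just a toy nearest-centroid classifier with two features per pixel, raw intensity and a 3x3 local mean as a crude texture cue. The `samples` dictionary plays the role of your freehand traces.

```python
# Illustrative sketch only -- NOT the plugin's implementation.
# Two features per pixel: intensity and a 3x3 local mean.

def local_mean(img, r, c):
    """Mean over the 3x3 neighbourhood, clamped at the image borders."""
    rows, cols = len(img), len(img[0])
    vals = [img[rr][cc]
            for rr in range(max(0, r - 1), min(rows, r + 2))
            for cc in range(max(0, c - 1), min(cols, c + 2))]
    return sum(vals) / len(vals)

def features(img, r, c):
    """Feature vector for one pixel: (intensity, local texture cue)."""
    return (img[r][c], local_mean(img, r, c))

def train(img, samples):
    """samples: {class_label: [(row, col), ...]} -- the user's 'traces'.
    'Training' here is just one centroid (mean feature vector) per class."""
    centroids = {}
    for label, pixels in samples.items():
        feats = [features(img, r, c) for r, c in pixels]
        centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                                 for i in range(len(feats[0])))
    return centroids

def classify(img, centroids):
    """Assign every pixel the label of the nearest class centroid."""
    def nearest(fv):
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(fv, centroids[lab])))
    return [[nearest(features(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

# Tiny 4x4 image: a bright "cell" in the top-left, dark background elsewhere.
image = [[200, 210, 10, 12],
         [205, 198, 11,  9],
         [ 12,  10,  8, 11],
         [  9,  11, 10, 10]]
samples = {"cell": [(0, 0), (1, 1)], "background": [(2, 2), (3, 3)]}
label_image = classify(image, train(image, samples))
# label_image now holds one class label per pixel, e.g. "cell" at (0, 0).
```

The real plugin follows the same train-then-apply pattern, but with many feature images (Gaussian blurs, edge filters, texture measures, ...) and a proper machine-learning classifier instead of centroids.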
I hope this helps clarify how the plugin works!