You do not need to trace/label the entire structure you want classified. To train a classifier for Trainable Weka Segmentation, it's sufficient to provide just a "few" representative example pixels. Browse the 3D stack to judge for yourself whether a structure is spherical or tubular, then draw a (2D) ROI somewhere inside it and add that ROI to the respective class. When you train the classifier, the plugin computes the 3D image features per voxel and learns from your traced labels, as examples, how to classify all the other voxels in the image.
It's a common misconception that you'd need a "full" ground truth to train the classifier. Just start with a few sparse labels, then refine them after an initial training round wherever you see the prediction is still wrong.
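To make the idea concrete, here is a minimal, self-contained sketch of sparse-label voxel classification in Python. Everything in it is illustrative: the synthetic "stack", the Gaussian-smoothing features, and the toy nearest-centroid classifier standing in for Weka's random forest — this is not the plugin's actual implementation, just the principle of learning from a few labeled voxels and predicting all the rest.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)

# Toy 3D stack: a bright spherical blob on a noisy background.
vol = rng.normal(0.0, 0.1, (20, 20, 20))
zz, yy, xx = np.mgrid[:20, :20, :20]
blob = ((zz - 10) ** 2 + (yy - 10) ** 2 + (xx - 10) ** 2) < 25
vol[blob] += 1.0

# Per-voxel 3D features: raw intensity plus Gaussian smoothings at two scales.
feats = np.stack([vol,
                  ndi.gaussian_filter(vol, 1.0),
                  ndi.gaussian_filter(vol, 2.0)], axis=-1)

# Sparse labels: a short "ROI" inside the blob (class 1) and a
# background scribble (class 0); -1 means unlabeled.
labels = np.full(vol.shape, -1)
labels[10, 10, 8:13] = 1
labels[2, 2, 2:7] = 0

# Toy nearest-centroid classifier trained only on the labeled voxels.
mask = labels >= 0
X, y = feats[mask], labels[mask]
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Predict every voxel: assign the class with the closest feature centroid.
dists = ((feats[..., None, :] - centroids) ** 2).sum(axis=-1)
pred = dists.argmin(axis=-1)
print(pred[10, 10, 10], pred[0, 0, 0])  # blob center vs. corner voxel
```

Even though only ten voxels were labeled, every voxel in the volume gets a prediction, which is exactly why sparse scribbles are enough to get a first result you can then refine.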
Does that make sense?