This paper compares two approaches for semantic segmentation of breast tumors in ultrasound images. The first approach, called conventional, follows the typical pattern classification pipeline: hand-crafted features are extracted and then each pixel is classified with a Multilayer Perceptron (MLP). The second approach, called convolutional, uses a Convolutional Neural Network (CNN) to learn features automatically. Both approaches are evaluated on a breast ultrasound dataset of 1,200 images. Experimental results show that the CNNs VGG16 and ResNet50 outperform the conventional approach on several segmentation quality indices. These results reflect the difficulty of designing discriminant hand-crafted features, which depends on the problem domain and the designer's skill. In contrast, through transfer learning, a pre-trained CNN can be adapted to address the tumor segmentation problem satisfactorily. This performance arises because a CNN learns general features in its early layers, while increasingly task-specific features are activated as depth increases.