This work presents an urban-scene segmentation method for acquiring road sign descriptions and annotations. The process computes geometric characteristics from 3D point clouds (such as dimensions and shape) and visual characteristics from image data (such as color, wear, and damage). We handle the spatial and visual information of the road signs separately, to be fused through GPS data in future work. The process for obtaining spatial information from 3D point clouds includes: (i) object segmentation based on 3D point cloud density, (ii) use of the retro-reflectivity of the sign material to distinguish road sign candidates, (iii) plane orientation estimation via singular value decomposition, and (iv) projection onto a 2D point cloud for geometric shape estimation. The process for obtaining visual information from images comprises: (i) color segmentation of the road signs into two parts, border color and inner color, (ii) color identification using the HSV color model, (iii) geometric shape association via contour comparison, and (iv) extraction and description of local features from semantic content such as numbers, characters, and drawings. We chose to work with low-rise road signs because the elevation angle of mobile laser scanning sensors limits the acquisition range. We selected an experimental ground truth from the KITTI data set to demonstrate adequate visual and spatial segmentation.
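
Step (iii) of the spatial pipeline, plane orientation via singular value decomposition, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the singular vector associated with the smallest singular value of a centered point cluster is the direction of least variance, i.e. the plane normal of a planar sign face.

```python
import numpy as np

def plane_normal(points):
    """Estimate the best-fit plane normal of an Nx3 point cluster via SVD.

    The rows of Vt are ordered by decreasing singular value, so the last
    row is the direction of least variance: the plane normal.
    """
    centered = points - points.mean(axis=0)  # remove the centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

# Toy cluster: points near the z = 0 plane with slight noise,
# standing in for a segmented road-sign face.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 100),
                       rng.uniform(-1, 1, 100),
                       rng.normal(0.0, 1e-3, 100)])
normal = plane_normal(pts)
print(np.abs(normal))  # approximately [0, 0, 1]
```

Projecting the cluster onto the plane spanned by the first two singular vectors then yields the 2D point cloud used for shape estimation in step (iv).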
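
Step (ii) of the visual pipeline, color identification in the HSV model, can be illustrated with a simple hue-threshold classifier. The thresholds below are assumptions for illustration only; the paper's actual ranges are not stated here.

```python
import colorsys

def classify_sign_color(r, g, b):
    """Map an RGB pixel (0-255 channels) to a coarse road-sign color class.

    HSV separates chromatic content (hue) from intensity (value) and
    purity (saturation), which makes fixed thresholds more robust to
    lighting changes than thresholds in RGB. Hue cuts are illustrative.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"
    if s < 0.2:
        return "white"
    deg = h * 360  # hue in degrees
    if deg < 20 or deg >= 340:
        return "red"
    if 20 <= deg < 70:
        return "yellow"
    if 200 <= deg < 260:
        return "blue"
    return "other"

print(classify_sign_color(200, 30, 30))   # e.g. a red prohibition border
print(classify_sign_color(30, 80, 200))   # e.g. a blue information sign
```

Running the classifier on the border region and the inner region separately mirrors the two-part border-color/inner-color segmentation described above.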