I'm using Google Colab to detect objects in images and to process the resulting localization information. The location information comes as a 2D boolean mask of size 1024 x 1024, which I then upscale. This probably makes little sense when read without context, but a full explanation would be excessive here.
The function in question is implemented in the code below and works; I don't need any help with that part.
Unfortunately, the code seems to be very inefficient. The Colab session crashes once 25 GB of RAM are used up, which can already happen here when memory is partly occupied beforehand. Can anyone tell me how to make this more efficient?
Best regards
Code:
# function to resize a defect mask
def resize_mask(_xyz_resized_contour_coordinates_x, _xyz_resized_contour_coordinates_y, _dimension_x, _dimension_y):
    # create a 2D array of zeros with the x and y dimensions of the xyz-file; this will be filled with the resized mask
    _xyz_resized_mask = np.zeros((_dimension_y, _dimension_x))
    # mark the resized contour with 1's (still with holes (empty rows etc.) caused by the resizing)
    _xyz_resized_mask[_xyz_resized_contour_coordinates_y, _xyz_resized_contour_coordinates_x] = 1
    # loop to fill the inside of the resized mask with 1's:
    # we iterate from the first row where the contour begins to the last row.
    # In every pass we check whether the row is empty by counting its non-zeros.
    # If the row is not empty, we take the indices of the minimal and maximal occupied column in this row as the new contour.
    # If the row is empty, we reuse the contour columns of the previous row.
    # After this check we set all values in _xyz_resized_mask between the determined contour columns to 1.
    # The result is a fully filled resized mask matching the xyz-file dimensions.
    # both row bounds must come from the y coordinates
    _min_row = min(_xyz_resized_contour_coordinates_y)
    _max_row = max(_xyz_resized_contour_coordinates_y)
    for _row_index in range(_min_row, _max_row + 1):
        _non_zeros_in_this_row = np.count_nonzero(_xyz_resized_mask[_row_index])
        if _non_zeros_in_this_row > 0:
            _non_zero_cols = np.where(_xyz_resized_mask[_row_index] == 1)[0]
            _min_non_zero_col = min(_non_zero_cols)
            _max_non_zero_col = max(_non_zero_cols)
        # an empty row keeps the column bounds of the previous row
        _xyz_resized_mask[_row_index, _min_non_zero_col : _max_non_zero_col + 1] = 1
    return _xyz_resized_mask
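For illustration, the same row-filling idea can also be sketched without scanning every full row of the big array: the per-row column bounds can be computed directly from the contour coordinates, and a `bool` mask costs 1 MiB instead of the 8 MiB a default `float64` array needs at 1024 x 1024. Function name and signature here are my own, not part of the code above:

```python
import numpy as np

def fill_contour(xs, ys, dim_x, dim_y):
    # per-row min/max contour column, computed straight from the coordinates
    lo = np.full(dim_y, dim_x, dtype=np.int64)
    hi = np.full(dim_y, -1, dtype=np.int64)
    np.minimum.at(lo, ys, xs)
    np.maximum.at(hi, ys, xs)
    # bool dtype: 1 byte per pixel instead of 8 for float64
    mask = np.zeros((dim_y, dim_x), dtype=bool)
    prev_lo = prev_hi = None
    for r in range(min(ys), max(ys) + 1):
        if hi[r] >= 0:
            prev_lo, prev_hi = lo[r], hi[r]
        # empty rows reuse the previous row's bounds, as in the original loop
        mask[r, prev_lo:prev_hi + 1] = True
    return mask
```

This is only a sketch under the assumption that the coordinate arrays contain integer pixel indices; the remaining per-row loop only slices, instead of calling `count_nonzero` and `where` on each full row.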
# create a list to store each xyz_resized_mask in
list_of_xyz_resized_masks = []
# loop over each defect by its id
for i, defect_id in enumerate(results['defect_ids'], start=1):
    percentual_progress = round(100 * i / len(results['defect_ids']), 2)
    print("{} %".format(percentual_progress))
    # call the function from above and append its return value to list_of_xyz_resized_masks
    list_of_xyz_resized_masks.append(resize_mask(_xyz_resized_contour_coordinates_x = results['percentual_contour_coordinates_x'][defect_id],
                                                 _xyz_resized_contour_coordinates_y = results['percentual_contour_coordinates_y'][defect_id],
                                                 _dimension_x = xyz_dimension_x,
                                                 _dimension_y = xyz_dimension_y))
# add the list_of_xyz_resized_masks to the results dict
results['xyz_resized_masks'] = list_of_xyz_resized_masks
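One more thought on the RAM side (an assumption about where the memory goes, not something measured): since each mask only holds 0/1, keeping hundreds of them in `list_of_xyz_resized_masks` as `float64` arrays (8 MiB each at 1024 x 1024) adds up fast. They could be stored bit-packed with `np.packbits`, which shrinks each mask to 128 KiB and is losslessly reversible:

```python
import numpy as np

# hypothetical standalone example, not wired into the code above
mask = np.zeros((1024, 1024), dtype=np.uint8)
mask[100:200, 300:400] = 1          # some filled region

packed = np.packbits(mask, axis=1)  # 1024 x 128 bytes = 128 KiB per mask
restored = np.unpackbits(packed, axis=1)[:, :1024]

assert np.array_equal(mask, restored)
```

The slicing after `unpackbits` is only needed when the row length is not a multiple of 8; at 1024 it is exact.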