Dataset
The lawn environment object dataset was built to train the algorithm. The typical static obstacles in the lawn environment mainly comprise trunks and spherical trees, while the dynamic obstacles mainly comprise people. No publicly available dataset meets the requirements of this paper; therefore, to verify the optimized algorithm network, it is necessary to create the dataset. The developed dataset has three main classes, namely, trunk, spherical tree, and person. The trunk and spherical tree images were captured in the field in the lawn environment, a total of 8059 images, including 7922 trunk samples and 567 spherical tree samples, and the image size was 564 × 422 × 3. Trunk and spherical tree are shown in Fig. 5.

Trunk and spherical tree in dataset.
The person dataset was derived from all the images in PASCAL VOC 2007 covering the person class, with 4012 images. The entire dataset contains 12,071 images. To accelerate network convergence and prevent gradient explosion, the labeled data are normalized, and the format of the normalized label data is
$$ \begin{array}{*{20}l} {class\_id} \hfill & x \hfill & y \hfill & w \hfill & h \hfill \end{array}, $$
where class_id is the object class (trunk is 0, spherical tree is 1, and person is 2), x and y are the coordinates of the center point, and w and h denote the width and height of the normalized object bounding box, respectively.
$$ x = \left( {x_{max} + x_{min}} \right)/2u, $$
(6)
$$ y = \left( {y_{max} + y_{min}} \right)/2v, $$
(7)
$$ w = \left( {x_{max} - x_{min}} \right)/u, $$
(8)
$$ h = \left( {y_{max} - y_{min}} \right)/v, $$
(9)
where xmin and ymin are the coordinates of the upper-left corner of the target bounding box, xmax and ymax are the coordinates of the lower-right corner of the object bounding box, and u and v are the width and height of the image, respectively, as shown in Fig. 6.
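Equations (6)–(9) can be sketched in Python as follows (the function name and the example box are illustrative, not from the paper):

```python
def normalize_box(x_min, y_min, x_max, y_max, u, v):
    """Convert corner coordinates to the normalized YOLO (x, y, w, h) format.

    u and v are the image width and height; all outputs lie in [0, 1].
    """
    x = (x_max + x_min) / (2 * u)  # Eq. (6): normalized center x
    y = (y_max + y_min) / (2 * v)  # Eq. (7): normalized center y
    w = (x_max - x_min) / u        # Eq. (8): normalized width
    h = (y_max - y_min) / v        # Eq. (9): normalized height
    return x, y, w, h

# Example: a 100 x 100 pixel box inside a 564 x 422 image
print(normalize_box(182, 111, 282, 211, 564, 422))
```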

Annotation parameters of an object in the coordinate system.
The dataset was randomly divided, and the numbers of images in the training set, validation set, and testing set are 7727, 1932, and 2412, respectively. The anchor boxes are clustered using the k-means++ algorithm, an extension of k-means23, and the resulting sizes are (12, 48), (30, 99), (49, 206), (110, 149), (94, 331), and (243, 329).
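The anchor clustering step can be sketched with a plain NumPy implementation of k-means with k-means++ seeding; this is a minimal illustration on stand-in data (the paper clusters its own labeled box sizes, and YOLO-style clustering often substitutes an IoU-based distance for the Euclidean one used here):

```python
import numpy as np

def kmeans_pp(points, k, iters=50, seed=0):
    """k-means with k-means++ seeding (Euclidean distance).

    points: (N, 2) array of ground-truth box (w, h) pairs in pixels.
    Returns k cluster centers sorted by anchor area, smallest first.
    """
    rng = np.random.default_rng(seed)
    # k-means++ initialization: first center uniform, the rest D^2-weighted
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([((points - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    centers = np.array(centers, dtype=float)
    # Standard Lloyd iterations: assign points, then recompute centers
    for _ in range(iters):
        labels = ((points[:, None] - centers[None]) ** 2).sum(2).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(0)
    return centers[np.argsort(centers.prod(1))]

# Stand-in (w, h) data drawn around three size modes
rng = np.random.default_rng(1)
wh = np.vstack([rng.normal(m, 15, (100, 2)) for m in (40, 120, 300)])
print(kmeans_pp(wh, 6).round(0))
```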
Algorithm training
Regarding the network training platform, the operating system is Windows 10, the CPU is an Intel i7-9700KF with a 3.6 GHz clock speed, the memory size is 16 GB, the GPU is an NVIDIA GeForce RTX 2070 Super with 8 GB of memory, the deep learning framework is AlexeyAB-Darknet, and the compilation environment is Visual Studio 2015 with C/C++.
The total number of training iterations of the Optimized tiny YOLOv3 algorithm was 25,000, the initial learning rate was set to 0.00261, and at 15,000 and 25,000 iterations the learning rate was reduced to 10% of its previous value. The decay value was set to 0.0005. During training, the images were rotated and the hue and saturation were varied to prevent overfitting. The loss function uses CIoU, and the type of non-maximum suppression is greedy-NMS (greedy non-maximum suppression). The Optimized tiny YOLOv3 algorithm is compared with the original tiny YOLOv3, Improved tiny YOLOv3, and TF-YOLO in terms of loss and mAP (mean average precision) on the lawn environment object dataset.
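Greedy-NMS repeatedly keeps the highest-scoring detection and suppresses any remaining box that overlaps it beyond an IoU threshold. A minimal sketch (the 0.45 threshold and the example detections are illustrative, not taken from the paper):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, iou_thresh=0.45):
    """Keep the top-scoring box, drop boxes overlapping it, and repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two overlapping detections of one object plus one separate detection
boxes = [(10, 10, 60, 110), (12, 14, 62, 112), (200, 50, 260, 160)]
scores = [0.9, 0.8, 0.7]
print(greedy_nms(boxes, scores))  # → [0, 2]
```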
The curve of the loss function value versus the number of training iterations for each algorithm during training is shown in Fig. 7. The red line in the figure marks a loss function value of 0.5, and a loss function value below 0.5 can be considered to indicate good detection performance.

The higher the red line sits relative to the bottom horizontal axis, the better the convergence of the algorithm. In Fig. 7, TF-YOLO has the best convergence, followed by Optimized tiny YOLOv3. Compared with the original tiny YOLOv3, the algorithm proposed in this paper shows a significant improvement in convergence.
The mAP curve during the training process is shown in Fig. 8.

In Fig. 8, Optimized tiny YOLOv3 has the highest mAP value on the validation set compared with all the other algorithms, and it achieves the best training effect on the training set.