Deep Learning data processing pipeline VdLMV-DL

VdLMV-DL, our new Deep Learning data processing pipeline, aims to get the best performance out of small datasets and, through semi-automatic annotation, to minimise the effort of annotating them.
Open-source deep learning models on GitHub commonly ship with training algorithms that are specifically tuned to perform best on de facto benchmark datasets such as COCO, which contain hundreds of thousands of small images (e.g. 640x640 pixels).
Our clients, in contrast, typically have small datasets of only hundreds of samples, at much higher resolutions. This requires a different approach to optimised training.
Highlights of VdLMV-DL:
- Integration with the open-source annotation tool CVAT for semi-automatic annotation. Models trained with VdLMV-DL on a small subset of the dataset can be used to generate pre-annotations for the remaining, larger part of the dataset. In this way clients only have to check and correct the pre-annotations instead of annotating all data from scratch, reducing the time and cost of labelling the dataset.
- Dynamic balancing of datasets during training.
- Hierarchical tiling method, so that both training and inference can run on very high-resolution images.
- Hyper-parameter tuning using a distributed genetic algorithm.
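To illustrate the pre-annotation workflow, the sketch below converts model detections into a minimal COCO-style dictionary, a format CVAT can import as pre-annotations. The function name `to_coco` and the exact set of fields are our own illustrative choices, not part of VdLMV-DL's actual interface:

```python
def to_coco(image_paths, predictions, categories):
    """Convert per-image detections into a minimal COCO-style dict.

    `predictions` is a list (one entry per image) of detections,
    each a (label, score, (x, y, w, h)) tuple. Field names follow
    the COCO object-detection annotation format.
    """
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": i + 1, "name": c} for i, c in enumerate(categories)],
    }
    cat_id = {c: i + 1 for i, c in enumerate(categories)}
    ann_id = 1
    for img_id, (path, dets) in enumerate(zip(image_paths, predictions), start=1):
        coco["images"].append({"id": img_id, "file_name": path})
        for label, score, (x, y, w, h) in dets:
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_id[label],
                "bbox": [x, y, w, h],
                "score": score,
                "iscrowd": 0,
            })
            ann_id += 1
    return coco

# One detection on the first image, none on the second.
preds = [[("defect", 0.93, (10, 20, 64, 48))], []]
coco = to_coco(["a.png", "b.png"], preds, ["defect"])
```

The resulting dictionary can be serialised to JSON and uploaded to a CVAT task, after which annotators only verify and correct the boxes.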
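Dynamic balancing can be sketched with inverse-frequency sampling: rare classes get proportionally higher sampling weights, so each class is seen roughly equally often during training. This is a minimal stand-alone illustration of the idea, not VdLMV-DL's actual implementation:

```python
import random
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes get larger weights."""
    counts = Counter(labels)
    return {c: 1.0 / n for c, n in counts.items()}

def balanced_sample(labels, k, rng=random.Random(0)):
    """Draw k sample indices, weighted so each class is drawn about equally often."""
    w = class_weights(labels)
    weights = [w[label] for label in labels]
    return rng.choices(range(len(labels)), weights=weights, k=k)

# A heavily imbalanced toy dataset: 10 "defect" vs 90 "ok" samples.
labels = ["defect"] * 10 + ["ok"] * 90
idx = balanced_sample(labels, k=1000)
drawn = Counter(labels[i] for i in idx)
# Each class ends up with roughly 500 of the 1000 draws.
```

In a real training loop the weights would be recomputed as the dataset or the per-class loss evolves, which is what makes the balancing dynamic.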
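The tiling idea can be illustrated by computing an overlapping grid of fixed-size crops over a high-resolution image; the overlap keeps objects near tile borders fully visible in at least one tile. The function below is our own simplified sketch (a single tiling level, with hypothetical default sizes), not the hierarchical method itself:

```python
def tile_grid(width, height, tile=640, overlap=64):
    """Return (x0, y0, x1, y1) boxes covering an image with overlapping tiles.

    Assumes width >= tile and height >= tile.
    """
    step = tile - overlap
    xs = list(range(0, width - tile + 1, step))
    ys = list(range(0, height - tile + 1, step))
    # Ensure the right and bottom edges are always covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]

# Tile a 4000x3000 image into 640x640 crops with 64 px overlap.
boxes = tile_grid(4000, 3000)
```

Detections from the individual tiles are then mapped back to full-image coordinates and merged, so both training and inference stay within the memory budget of a fixed tile size.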
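A genetic algorithm for hyper-parameter tuning evolves a population of candidate configurations via selection, crossover, and mutation. The sketch below shows the single-process core of such a loop; the `fitness` function is a synthetic stand-in for a real (and expensive) validation run, which in a distributed setting would be evaluated on many workers in parallel:

```python
import random

rng = random.Random(42)

# Hypothetical search space: continuous ranges per hyper-parameter.
SPACE = {"lr": (1e-5, 1e-1), "weight_decay": (1e-6, 1e-2)}

def random_genome():
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def fitness(g):
    # Stand-in for a validation score; peaks at lr=1e-3, weight_decay=1e-4.
    return -abs(g["lr"] - 1e-3) - abs(g["weight_decay"] - 1e-4)

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return {k: rng.choice((a[k], b[k])) for k in SPACE}

def mutate(g, rate=0.3):
    out = dict(g)
    for k, (lo, hi) in SPACE.items():
        if rng.random() < rate:
            out[k] = rng.uniform(lo, hi)
    return out

def evolve(generations=30, pop_size=20, elite=4):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # elitism: the best genomes survive unchanged
        children = [mutate(crossover(*rng.sample(parents, 2)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because each fitness evaluation is an independent training-plus-validation run, the population maps naturally onto a pool of workers, which is what makes the distributed variant attractive for expensive models.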