A global shipping and supply chain company aimed to apply machine learning to improve how packages were loaded and processed inside their delivery trucks. By using computer vision to analyze truck interiors and determine package fill levels, they could optimize space usage, increase shipment capacity, and improve the trailer unloading process.
However, the data science team faced a common bottleneck: they lacked the ability to generate high-quality labeled image data at scale. Without a centralized platform for visualizing unstructured images, it was difficult to prioritize what to annotate, identify edge cases, or eliminate duplicates. They also lacked the in-house resources to manage large-scale image labeling.
To address this, the team adopted Databrewery as their end-to-end training data platform. Using Databrewery, they launched object detection and bounding box projects focused on truck interiors. With the ability to filter and sort by metadata such as camera number, date and time, and trailer unloading process, they quickly organized and annotated the most relevant data.
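As an illustration of this kind of metadata-driven triage, here is a minimal sketch that filters a catalog of image records down to a single camera, unloading process, and time window before queuing them for annotation. The record schema and field names (camera_id, unload_process, and so on) are hypothetical stand-ins, not Databrewery's actual data model.

```python
from datetime import datetime

# Hypothetical image records as they might be exported from an image catalog.
# Field names are illustrative only.
images = [
    {"uri": "s3://trailers/cam3/0001.jpg", "camera_id": 3,
     "captured_at": datetime(2023, 6, 1, 8, 15), "unload_process": "rear-door"},
    {"uri": "s3://trailers/cam7/0042.jpg", "camera_id": 7,
     "captured_at": datetime(2023, 6, 1, 9, 30), "unload_process": "side-belt"},
]

def select_batch(records, camera_id, process, start, end):
    """Return images from one camera and unloading process within a time window."""
    return [
        r for r in records
        if r["camera_id"] == camera_id
        and r["unload_process"] == process
        and start <= r["captured_at"] <= end
    ]

batch = select_batch(images, camera_id=3, process="rear-door",
                     start=datetime(2023, 6, 1), end=datetime(2023, 6, 2))
print(f"{len(batch)} images queued for annotation")
```

Slicing the dataset this way keeps each annotation batch homogeneous, which makes it easier to spot edge cases and duplicates before any labeling effort is spent.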
They also partnered with Databrewery Boost to preprocess data and accelerate the annotation pipeline. By labeling a small seed set of images tied to a specific unloading process and then applying those labels across hundreds of thousands of similar images, the team built an automation workflow that scaled labeling dramatically. This approach cut labeling time and costs by 50% and eliminated thousands of hours of manual work.
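The seed-and-propagate pattern behind that workflow can be sketched in a few lines: label a handful of images, then use embedding similarity to propose the same labels for visually similar unlabeled images, auto-applying only high-confidence matches. The sketch below assumes precomputed image embeddings and uses class labels for simplicity (in the team's case the propagated annotations were bounding boxes); it does not reflect Databrewery Boost's actual internals.

```python
import numpy as np

# Random stand-ins for embeddings that a feature extractor would produce.
rng = np.random.default_rng(0)
seed_embeddings = rng.normal(size=(50, 512))       # 50 hand-labeled seed images
seed_labels = rng.integers(0, 3, size=50)          # e.g. fill-level classes
unlabeled_embeddings = rng.normal(size=(10_000, 512))

def normalize(x):
    """L2-normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cosine similarity between every unlabeled image and every seed image.
sims = normalize(unlabeled_embeddings) @ normalize(seed_embeddings).T

nearest = sims.argmax(axis=1)       # closest seed image for each candidate
confidence = sims.max(axis=1)
proposed = seed_labels[nearest]     # propagate the nearest seed's label

# Auto-apply only high-confidence matches; the rest go back to human review.
auto = confidence >= 0.9
print(f"auto-labeled {auto.sum()} of {len(proposed)} images")
```

The confidence threshold is the key design lever here: set high, it keeps auto-applied labels trustworthy while still diverting only ambiguous images to human annotators, which is where the bulk of the time and cost savings comes from.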
Now, with production-ready models in place, the team is expanding into new computer vision use cases — including robotics for package sorting and ML-powered systems to automate manual auditing tasks across their logistics network.