Towards human-robot collaboration in slaughterhouses
Currently, many operations in slaughterhouses are still performed manually: simple, systematic tasks that consume the time and effort of operators whose skills could be used more efficiently.
The task of the RK-project:
The task in this project was to implement a software control system with an advanced vision algorithm for product detection on a flexible, human-friendly robotics platform.
The process consists of a generic pick-and-place task in which a pallet/box filled with a meat product of approximately 2 kg per piece is emptied. The pipeline goes as follows: a box arrives at the working station, and each item is detected and moved to a different location for the next step in the production line.
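The pipeline described above can be sketched as a simple detect-pick-place loop. All class and function names here are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch of the pick-and-place loop: grab an image, detect the
# remaining pieces, pick one, and repeat until the box is empty.
from dataclasses import dataclass


@dataclass
class Detection:
    center: tuple        # (x, y) pixel coordinates of the piece
    angle_deg: float     # in-plane orientation of the piece


def empty_box(camera, detector, robot):
    """Repeatedly detect and pick pieces until the box is empty.

    `camera`, `detector`, and `robot` are hypothetical interfaces
    standing in for the real hardware and vision components.
    """
    picked = 0
    while True:
        image = camera.grab()
        detections = detector(image)
        if not detections:
            return picked              # box is empty; wait for the next one
        target = detections[0]         # simple policy: take the first piece
        robot.pick(target.center, target.angle_deg)
        robot.place_at_next_station()
        picked += 1
```

In practice the pick order would be chosen more carefully (e.g. topmost, unoccluded pieces first), but the control flow stays the same.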
The challenges for the human-friendly robot:
- Identifying the location of the meat products
- The lighting of the room
- The particular type of work surface
- The state of the meat product, which can vary widely in color, texture, shape and dimensions.
The core of the vision system developed in this project consists of state-of-the-art deep learning algorithms repurposed to detect meat products.
A key component in reaching high precision lies in the data preparation for training the algorithm. The data volume was artificially increased using custom-designed data augmentation techniques that varied both the appearance of the products and the background during training.
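The augmentation idea can be illustrated with a small sketch: jitter the product's appearance (brightness, color cast) and composite the masked product onto randomly chosen backgrounds. This is a generic NumPy illustration under assumed details; the project's actual augmentation pipeline is not described beyond the summary above.

```python
# Sketch of augmentation that varies both appearance and background.
import numpy as np

rng = np.random.default_rng(0)


def jitter_appearance(img):
    """Randomly scale brightness and shift each color channel."""
    gain = rng.uniform(0.7, 1.3)              # overall brightness
    shift = rng.uniform(-20, 20, size=3)      # per-channel color cast
    out = img.astype(np.float32) * gain + shift
    return np.clip(out, 0, 255).astype(np.uint8)


def swap_background(img, mask, background):
    """Composite the masked product onto a different background image."""
    mask3 = mask[..., None].astype(bool)      # (H, W, 1), broadcasts over RGB
    return np.where(mask3, img, background)


def augment(img, mask, backgrounds):
    """One augmented sample: new background, then appearance jitter."""
    bg = backgrounds[rng.integers(len(backgrounds))]
    return jitter_appearance(swap_background(img, mask, bg))
```

Applied repeatedly, this turns each annotated image into many training samples, which helps the detector cope with the large variation in product appearance and work surfaces.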
While this RK-project demonstrates the feasibility of implementing a flexible, robust, safe and adaptive control system which can solve pick and place tasks in a production line, we should also realize that this is only the beginning. In the next few years, we will likely see intelligent robotics platforms performing tasks in slaughterhouses with increasing levels of complexity, beyond pick and place.
The platforms will be able to collaborate with and learn from skilled operators. As they become more intelligent, they will even be able to learn from their own experience.
The future belongs to humans and robots in collaboration – it is inevitable.
Robot performing a dynamic pick-and-place task (speed x4)
Vision system detecting the different items in a live stream of images (speed x4). The center and orientation of the selected piece are represented by a blue dot and a red arrow.
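One common way to obtain a center and orientation like those shown in the caption above is to take the centroid of the detected mask and the principal axis of its pixels. This is a generic sketch of that standard technique, not necessarily the method used in the project:

```python
# Centroid + principal axis of a binary object mask.
import numpy as np


def center_and_orientation(mask):
    """Return ((cx, cy), angle_deg) for a binary mask of shape (H, W),
    where nonzero pixels mark the object."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()

    # Principal axis from the 2x2 covariance of the pixel coordinates.
    coords = np.stack([xs - cx, ys - cy])
    cov = coords @ coords.T / coords.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # direction of largest spread
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return (cx, cy), angle
```

The centroid gives the pick point (the blue dot) and the principal axis gives a gripper orientation (the red arrow), up to a 180-degree ambiguity that a real system would resolve with additional cues.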