By Eric Psota, Ty Schmidt, Benny Mote and Lance C. Pérez
Modern livestock operations are designed for efficiency and often house thousands of animals in group pens. In many cases, caretakers responsible for these animals must rely on brief, periodic observations of activities and behaviors to infer each animal's health status.
At the University of Nebraska-Lincoln, Benny Mote and Ty Schmidt from the Department of Animal Science have teamed up with Eric Psota and Lance C. Pérez in the Department of Electrical & Computer Engineering to develop a solution that makes it possible to continuously and objectively observe animal activities and behaviors on an individual basis. The solution relies solely on computer vision and image processing to track animals in footage from consumer-grade security cameras. Unlike the majority of methods being developed, it does not require animals to be equipped with specialized tracking hardware, nor does it require modification to the pens themselves. In this way, it is much less invasive and more cost-effective than, for example, radio frequency identification (RFID) or ultra-wideband (UWB) approaches.
One of the most critical components of the visual tracking method is its ability to detect the location and orientation of individual animals in each frame of the video. When detections in individual frames are linked across frames, detailed tracking results can be achieved on a per-animal basis (a simplified sketch of this linking step follows below). However, typical behaviors of group-housed pigs make tracking particularly difficult: they often fight, walk over each other, and pile on top of one another to conserve warmth.
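To illustrate the linking step, the sketch below matches detections in consecutive frames by minimizing total centroid distance with the Hungarian algorithm. This is a common simplification, not the team's actual tracker, and the distance threshold is an assumed value.

```python
# Illustrative sketch only -- not the research team's implementation.
# Matches per-frame detections between consecutive frames by minimizing
# total centroid distance with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_centroids, curr_centroids, max_dist=50.0):
    """Return a list of (prev_index, curr_index) matches.

    prev_centroids, curr_centroids: arrays of shape (N, 2) and (M, 2)
    holding (x, y) pig centroids in consecutive frames.
    max_dist: assumed pixel threshold; matches farther apart are rejected,
    leaving those detections to start or end a track.
    """
    # Pairwise Euclidean distances between all previous and current detections.
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=2
    )
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Example: three pigs, one of which moved noticeably between frames.
prev = np.array([[100.0, 200.0], [300.0, 220.0], [500.0, 400.0]])
curr = np.array([[105.0, 198.0], [340.0, 230.0], [498.0, 405.0]])
print(link_detections(prev, curr))  # [(0, 0), (1, 1), (2, 2)]
```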
To overcome the challenges posed by these behaviors, the method uses deep learning to detect pig parts and associate the parts with one another to form whole instances. A fully convolutional, pixel-to-pixel hourglass network converts an input image into pixel encodings that encapsulate both the locations of body parts and a way of joining them together. Figure 1 illustrates the encoding for a pair of pigs. Both the network and the associated method for joining parts together are illustrated in Figure 2.
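The published architecture is more elaborate than can be reproduced here, but a minimal sketch conveys the pixel-to-pixel hourglass idea: an encoder downsamples the frame, a decoder upsamples it back to full resolution, and the output carries one channel per body-part heatmap plus channels for the association encoding. The layer sizes and channel counts below are illustrative assumptions, not the actual configuration.

```python
# Illustrative sketch of a pixel-to-pixel "hourglass" network (PyTorch).
# The real network's depth, channel counts, and output encoding differ;
# here we assume 4 body-part heatmap channels plus 2 association channels.
import torch
import torch.nn as nn

class TinyHourglass(nn.Module):
    def __init__(self, part_channels=4, assoc_channels=2):
        super().__init__()
        # Encoder: downsample the image to a coarse feature map.
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Per-pixel outputs: body-part heatmaps plus association encodings.
        self.head = nn.Conv2d(32, part_channels + assoc_channels, 1)

    def forward(self, x):
        return self.head(self.up(self.down(x)))

net = TinyHourglass()
image = torch.randn(1, 3, 256, 256)   # one RGB frame
encodings = net(image)                # shape: (1, 6, 256, 256)
print(encodings.shape)
```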
For training the deep learning method, a comprehensive data set was created that contains human-annotated pig locations in 2,000 images captured at 17 different locations, including both research and industry facilities. Figure 3 illustrates sample images from each location.
After the data set was split into training and test images, the method was evaluated and shown to achieve 98 percent accuracy. The annotated data set is the largest of its kind, and it is publicly available for researchers to develop and evaluate their own methods.
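The evaluation protocol is not detailed in this article, but as a rough sketch of how detection accuracy can be scored, each annotated pig might be greedily matched to its nearest unmatched detection within a pixel threshold; the threshold used below is an assumption for illustration.

```python
# Illustrative accuracy scoring, assuming a detection "hits" an annotated
# pig when their centroids fall within `threshold` pixels. The team's
# actual matching criterion may differ.
import numpy as np

def detection_accuracy(gt, pred, threshold=20.0):
    """Fraction of annotated pigs recovered by a one-to-one detection match."""
    pred = [np.asarray(p) for p in pred]
    hits = 0
    for g in gt:
        if not pred:
            break
        dists = [np.linalg.norm(p - g) for p in pred]
        i = int(np.argmin(dists))
        if dists[i] <= threshold:
            hits += 1
            pred.pop(i)  # each detection may match at most one pig
    return hits / len(gt)

gt = [np.array([100.0, 100.0]), np.array([300.0, 120.0])]
pred = [[104.0, 98.0], [290.0, 126.0], [50.0, 400.0]]
print(detection_accuracy(gt, pred))  # 1.0 -- both pigs detected
```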
Pigs and many other livestock animals are visually homogeneous. This consistency is something producers strive for, but it introduces considerable challenges for visual tracking systems. Traditional multi-object tracking methods rely on visual distinctions between targets to re-identify them when visual detection is lost or when two or more tracks are shuffled or swapped.
While the system relies on the shape of the pigs as the primary means of detecting them, each pig is also equipped with an industry-standard Destron Fearing ear tag bearing a distinct color/number combination, providing a secondary form of identification that allows identity to be maintained over time. To allow the system to identify each tag, a collection of 5,290 human-annotated ear tag images was used to train a deep classification network. Figure 4 shows 64 sample images of ear tag crops used to train the network. In some cases, the ears are not visible, and the target classification is “unknown.”
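The exact architecture of that classification network is not specified here, but a minimal sketch shows the shape of the task: a small convolutional network maps a fixed-size ear tag crop to one of several color/number classes plus an “unknown” class. The crop size and number of tag classes below are assumptions made for illustration.

```python
# Illustrative ear tag classifier sketch (PyTorch). The number of tag
# classes, crop size, and architecture are assumptions, not the team's
# actual design; the final class stands in for "unknown" (ear not visible).
import torch
import torch.nn as nn

NUM_TAG_CLASSES = 16  # hypothetical number of color/number combinations

class TagClassifier(nn.Module):
    def __init__(self, num_classes=NUM_TAG_CLASSES + 1):  # +1 for "unknown"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):        # x: (batch, 3, 64, 64) ear tag crops
        f = self.features(x)     # -> (batch, 32, 16, 16)
        return self.classifier(f.flatten(1))

model = TagClassifier()
crops = torch.randn(8, 3, 64, 64)   # a batch of ear tag crops
logits = model(crops)               # (8, 17): 16 tag classes + "unknown"
print(logits.shape)
```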
Research Objectives
Ear tag classification, together with location/orientation detection, makes it possible to track individual animals over very long durations. This unprecedented level of detailed observation presents a wide range of research opportunities for animal scientists and behaviorists.
The research team is hoping to answer the following questions:
- How accurate is the tracking system at detecting basic activities like lying, standing, eating, drinking, and distance traveled?
- Can the system detect social behaviors like fighting, playing, belly nosing, and tail biting?
- Does sub-clinical illness in pigs manifest as changes in activities and behaviors prior to the presentation of visible symptoms?
- Do illnesses in the nursery phase cause aggression and/or other behavior abnormalities in the finisher phase?
- Can locomotion be used to select healthy gilts as replacement breeding animals?
- Can the system detect changes in locomotion and activity that indicate early signs of lameness?
The team is currently funded by three separate National Pork Board grants to explore all of these questions. To answer question 1, they will perform a large-scale test of the system using 24 cameras to simultaneously track the activities of 240 pigs during the nursery (three to 10 weeks old) and finisher (11 to 26 weeks old) phases. The system was developed and tested in an earlier deployment, where it demonstrated the ability to capture lying, standing, eating, and distance traveled (a simple illustration of one such metric follows below). Figure 5 shows a sample output of the current system. In collaboration with researchers at Kansas State University, the team will administer controlled lipopolysaccharide (LPS) challenges to nursery-phase pigs to answer questions 2, 3, and 4. Finally, they are collaborating with researchers at the U.S. Meat Animal Research Center (MARC) to answer questions 5 and 6.
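To give a sense of how per-animal activity metrics fall out of the tracking output, the sketch below computes distance traveled as cumulative centroid displacement. The pixel-to-meter calibration constant and jitter threshold are made-up values for illustration, not parameters of the deployed system.

```python
# Illustrative sketch: distance traveled from one pig's tracked centroids.
# The pixels-per-meter scale is a hypothetical calibration constant; the
# deployed system's activity measurement is more involved.
import numpy as np

PIXELS_PER_METER = 120.0  # hypothetical camera calibration

def distance_traveled(track, min_step=2.0):
    """Sum frame-to-frame centroid displacements, in meters.

    track: (T, 2) array of (x, y) centroids for one pig over T frames.
    min_step: pixel displacements below this are treated as detection
    jitter and ignored rather than accumulated.
    """
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return steps[steps >= min_step].sum() / PIXELS_PER_METER

track = np.array([[100.0, 100.0], [101.0, 100.5], [140.0, 130.0], [180.0, 160.0]])
print(f"{distance_traveled(track):.2f} m")  # ~0.82 m
```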
The University of Nebraska-Lincoln research team is excited to conduct this precision livestock farming research to help you, the responsible producer, provide sustainable pork to feed a growing world.