Module deepposekit

You have just found DeepPoseKit.

DeepPoseKit is a software toolkit with a high-level API for 2D pose estimation of user-defined keypoints using deep learning, written in Python and built on TensorFlow and Keras. Use DeepPoseKit if you need:

  • tools for annotating images or video frames with user-defined keypoints
  • a straightforward but flexible data augmentation pipeline using the imgaug package (see the sketch after this list)
  • a Keras-based interface for initializing, training, and evaluating pose estimation models
  • easy-to-use methods for saving and loading models and making predictions on new data
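
For example, an imgaug pipeline can be combined with DeepPoseKit's keypoint-aware FlipAxis augmenter and passed to the TrainingGenerator. This is a minimal sketch modeled on the package's example notebooks; the augmenter argument and the exact FlipAxis signature are assumptions worth checking against those notebooks:

from deepposekit.io import DataGenerator, TrainingGenerator
from deepposekit.augment import FlipAxis
import imgaug.augmenters as iaa

data_generator = DataGenerator('/path/to/annotation_data.h5')

# FlipAxis mirrors images and swaps symmetric keypoints using the
# swap information stored in the annotation set
augmenter = iaa.Sequential([
    FlipAxis(data_generator, axis=0),
    iaa.Affine(rotate=(-180, 180)),  # random rotation in degrees
])

train_generator = TrainingGenerator(data_generator, augmenter=augmenter)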

DeepPoseKit is designed with a focus on usability and extensibility, as being able to go from idea to result with the least possible delay is key to doing good research.

DeepPoseKit is currently limited to pose estimation of single individuals. If individuals can be easily distinguished visually (e.g., they have differently colored bodies or are marked in some way), then multiple individuals can simply be labeled with separate keypoints (head1, tail1, head2, tail2, etc.). Otherwise, DeepPoseKit can be extended to multiple individuals by first localizing, tracking, and cropping individuals with additional software such as idtracker.ai, pinpoint, or Tracktor.
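
For example, the skeleton file used to define keypoints for annotation (a CSV with name, parent, and swap columns, following the format of the deepposekit-data examples) could list two such individuals; the keypoint names here are hypothetical:

name,parent,swap
head1,,
tail1,head1,
head2,,
tail2,head2,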

Localization (without tracking) can also be achieved with deep learning software like keras-retinanet, the TensorFlow Object Detection API, or Matterport's Mask R-CNN.

Check out our paper to find out more.

NOTE: This software is still in early-release development. Expect some adventures.

How to use DeepPoseKit

DeepPoseKit is designed for easy use. For example, training and saving a model requires only a few lines of code:

from deepposekit.io import DataGenerator, TrainingGenerator
from deepposekit.models import StackedDenseNet

# Load the annotated data and wrap it in a generator that produces
# augmented training batches
data_generator = DataGenerator('/path/to/annotation_data.h5')
train_generator = TrainingGenerator(data_generator)

# Initialize the model from the generator, train it, and save it
model = StackedDenseNet(train_generator)
model.fit(batch_size=16, n_workers=8)
model.save('/path/to/saved_model.h5')
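
The other documented model architectures can be swapped in the same way. A brief sketch, with the caveat that constructor options beyond the training generator vary by model:

from deepposekit.models import StackedHourglass

# Any of the documented models (StackedDenseNet, StackedHourglass,
# DeepLabCut, LEAP) is initialized from a TrainingGenerator
model = StackedHourglass(train_generator)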

Loading a trained model and running predictions on new data is also straightforward. For example, running predictions on a new video:

from deepposekit.models import load_model
from deepposekit.io import VideoReader

# Load the trained model and read frames directly from the video file
model = load_model('/path/to/saved_model.h5')
reader = VideoReader('/path/to/video.mp4')

# Run inference over every frame in the video
predictions = model.predict(reader)
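
The resulting array holds x, y coordinates and a confidence score for each keypoint in every frame, so the channels can be unpacked directly. A short sketch, assuming the (frames, keypoints, 3) output shape used in the package's prediction examples:

import numpy as np

# Split the last axis into coordinates and per-keypoint confidence scores
x, y, confidence = np.split(predictions, 3, axis=-1)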

Using DeepPoseKit is a 4-step process:

  1. Create an annotation set
  2. Annotate your data
  3. Train a model
  4. Predict on new data

For more details, see the example notebooks in the DeepPoseKit repository.

"I already have annotated data"

DeepPoseKit is designed to be extensible, so loading data in other formats is possible.

Have data in another format? You can write your own custom generator to load it. Check out the example for writing custom data generators.
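
As a starting point, a custom generator subclasses deepposekit.io.BaseGenerator and overrides a handful of methods. This is a minimal sketch for data held in numpy arrays, modeled on that example; check the notebook for the authoritative interface:

from deepposekit.io import BaseGenerator

class ArrayGenerator(BaseGenerator):
    """Serves images with shape (n_samples, height, width, channels)
    and keypoints with shape (n_samples, n_keypoints, 2) from memory."""

    def __init__(self, images, keypoints, **kwargs):
        self.images = images
        self.keypoints = keypoints
        super(ArrayGenerator, self).__init__(**kwargs)

    def __len__(self):
        return self.images.shape[0]

    def compute_image_shape(self):
        return self.images.shape[1:]

    def compute_keypoints_shape(self):
        return self.keypoints.shape[1:]

    def get_images(self, indexes):
        return self.images[indexes]

    def get_keypoints(self, indexes):
        return self.keypoints[indexes]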

If you have annotated data from DeepLabCut (http://deeplabcut.org), try our (experimental) example notebook for loading data in this format.

Installation

DeepPoseKit requires TensorFlow for training and using pose estimation models. TensorFlow should be manually installed, along with dependencies such as CUDA and cuDNN, before installing DeepPoseKit (see the TensorFlow installation instructions at https://www.tensorflow.org/install):
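
For example, with pip (the right package and version depend on your system and GPU setup; see the installation guide linked above):

# CPU-only build
pip install tensorflow

# or, for GPU support (requires matching CUDA and cuDNN versions)
pip install tensorflow-gpu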

DeepPoseKit has only been tested on Ubuntu 18.04, which is the recommended system for using the toolkit.

Install the latest stable release with pip:

pip install --upgrade deepposekit

Install the latest development version with pip:

pip install --upgrade git+https://www.github.com/jgraving/deepposekit.git

You can download example datasets from our DeepPoseKit Data repository:

git clone https://www.github.com/jgraving/deepposekit-data

Installing with Anaconda on Windows

To install DeepPoseKit on Windows, you must first manually install Shapely, one of the dependencies for the imgaug package:

conda install -c conda-forge shapely

We also recommend installing DeepPoseKit from within Python rather than using the command line, either from within Jupyter or another IDE, to ensure it is installed in the correct working environment:

import sys
!{sys.executable} -m pip install --upgrade deepposekit

Contributors and Development

DeepPoseKit was developed by Jake Graving and Daniel Chae, and is still being actively developed.

We welcome community involvement and public contributions to the toolkit. If you wish to contribute, please fork the repository to make your modifications and submit a pull request.

If you'd like to get involved with developing DeepPoseKit, get in touch (jgraving@gmail.com) and check out our development roadmap to see future plans for the package.

Issues

Please submit bugs or feature requests to the GitHub issue tracker. Please limit reported issues to the DeepPoseKit codebase and provide as much detail as you can with a minimal working example if possible.

If you experience problems with Tensorflow, such as installing CUDA or cuDNN dependencies, then please direct issues to those development teams.

License

Released under the Apache 2.0 License. See LICENSE for details.

References

If you use DeepPoseKit for your research, please cite our open-access paper:

@article{graving2019deepposekit,
         title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
         author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
         journal={eLife},
         volume={8},
         pages={e47994},
         year={2019},
         publisher={eLife Sciences Publications Limited},
         url={https://doi.org/10.7554/eLife.47994}
         }

You can also read our open-access preprint:

@article{graving2019preprint,
         title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
         author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
         journal={bioRxiv},
         pages={620245},
         year={2019},
         publisher={Cold Spring Harbor Laboratory},
         url={https://doi.org/10.1101/620245}
         }

If you use the imgaug package for data augmentation, please also consider citing it.

If you're using data annotated with the DeepLabCut package (http://deeplabcut.org), be sure to cite it.

Please also consider citing the relevant references for the pose estimation model(s) used in your research, which can be found in the documentation (i.e., StackedDenseNet, StackedHourglass, DeepLabCut, LEAP).

Source code
# -*- coding: utf-8 -*-
# Copyright 2018-2019 Jacob M. Graving <jgraving@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#    http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
import sys
import warnings

from deepposekit.io import TrainingGenerator, DataGenerator
from deepposekit.augment.FlipAxis import FlipAxis

from deepposekit.annotate.gui.Annotator import Annotator
from deepposekit.annotate.gui.Skeleton import Skeleton
from deepposekit.annotate.KMeansSampler import KMeansSampler

from deepposekit.io.video import VideoReader, VideoWriter


__doc__ = open('README.md').read()
__version__ = "0.3.4.dev"

Sub-modules

deepposekit.annotate
deepposekit.augment
deepposekit.callbacks
deepposekit.io
deepposekit.models
deepposekit.utils