ricardo kloss edited this page Feb 11, 2016 · 14 revisions

Smart Surveillance Interest Group Library (SSIGLib)

SSIGLib is released under a BSD license and is therefore free for both academic and commercial use. It is a C/C++ library built on OpenCV and the C++ Standard Template Library that provides a set of functionalities to aid researchers both in the development of surveillance systems and in the creation of novel solutions to problems related to video surveillance.

Users

The new, redesigned version of SSIGLib is under development. If you are interested in an old version of the Smart Surveillance Framework, please send an email to antonio.nazare@dcc.ufmg.br requesting the download.

Contributors

Any help is welcome :D

More About SSIGLib

One of SSIGLib's main goals is to provide a set of data structures that describe the scene, allowing researchers to focus only on their problems of interest and reuse this information rather than rebuilding such infrastructure for every problem they tackle, as is done in most cases today. For instance, a researcher working on individual action recognition would normally first need to capture the data, then detect and track people, and only then recognize their actions. With SSIGLib, one simply launches the detection and tracking modules to obtain the people's locations and can concentrate on the problem at hand, action recognition, without worrying about how data representation, storage, and communication are designed.

SSIGLib was designed to provide good scene understanding, scalability, real-time operation, support for multi-sensor environments, use of low-cost standard components, run-time reconfiguration, and communication control.

The main benefits of using SSIGLib are the following:

  • A platform for comparing and exchanging research results, to which researchers can contribute modules that solve specific problems;
  • A library that allows fast development of new video analysis techniques, since one can focus only on one's specific task;
  • Creation of a high-level semantic representation of the scene, built from data extracted by low-level modules, to enable activity recognition;
  • A testbed that allows further development in activity understanding, since one can work directly with real data rather than annotated data, which may prevent a method from working in real environments.