That’s right, my fellow Kinenerds, we want your images and we want them now! (Or at a time that suits you, no pressure.) Since you’re all following this blog and keeping up to date with the latest goings-on at Kinesense, I’m sure you’ve already read about our new Object Classification models. We’re constantly working to improve these models, and we can only do that with your help.
How can we help?
That’s an excellent question, reader. The first step is to get in touch with our technical team by emailing us directly at firstname.lastname@example.org. You can share either raw footage or output images.
For output images, one of our technical team can remote into an available PC and set everything up for you. The video will be imported into Kinesense with Object Classification turned on, and the images Kinesense finds will be saved to a folder. The contents of that folder can then be used to improve the Object Classification model.
Of course, you can try this out yourself: open the advanced settings for the event detection algorithm, select Use Object Classification, then enable the debug option Output Detections? and choose the folder where you want the images saved.
With GDPR in the back of everyone’s mind whenever sharing information comes up, you’re probably worried about giving us your CCTV footage. Don’t worry: sharing CCTV data for research and development is both legitimate and lawful under the Data Protection Act (Schedule 2, Part 6), GDPR Article 5(1)(b), and Article 13 of the EU Charter of Fundamental Rights, and it is encouraged by the UK Home Office and the Defence and Security Accelerator, who champion innovation in technology for security and policing.
All the test images we use are stripped of their original source metadata (timings, locations, etc.) and kept in a low-resolution, cropped format that contains no personal data.
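To give a flavour of what that anonymisation involves, here is a minimal sketch in Python using the Pillow library. This is not Kinesense’s actual pipeline; the function name and size threshold are illustrative, but the idea is the same: rebuild the image from raw pixels so no embedded metadata survives, then downscale it well below identifiable resolution.

```python
# Illustrative sketch only, not Kinesense's actual pipeline.
from PIL import Image


def anonymise(src_path: str, dst_path: str, max_size: int = 96) -> None:
    """Save a low-resolution copy of an image with no source metadata."""
    with Image.open(src_path) as img:
        # Rebuilding the image from raw pixel data drops EXIF and any
        # other embedded metadata (timestamps, GPS, camera info).
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        # Downscale (preserving aspect ratio) so the saved copy is far
        # too small to identify an individual.
        clean.thumbnail((max_size, max_size))
        clean.save(dst_path)
```

In practice you would run something like this over every detection image in the output folder before it ever leaves the machine.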
Please contact us at email@example.com with any questions, or if you are interested in helping us improve Kinesense.