S3A Audio-Visual System for Object-Based Audio dataset

Updated: 02 January 2018
Contact person: Philip Coleman <p.d.coleman@surrey.ac.uk>

Please register before downloading the resources on this page.

Software Tools

Figures 9, 10 and 11 can be recreated using the code and data supplied here (.zip, 204 MB).

This dataset was created as part of the EPSRC programme grant "S3A: Future spatial audio for an immersive listening experience at home". The copyright is owned by the Centre for Vision, Speech and Signal Processing, University of Surrey, UK. Permission is hereby granted to use the S3A Audio-Visual System for Object-Based Audio dataset for academic purposes only, provided that it is suitably referenced in publications related to its use as follows:

Coleman, P., et al. (2018) "S3A Audio-Visual System for Object-Based Audio", http://dx.doi.org/10.15126/surreydata.00845514

The article Coleman, P., et al. (2018), "An Audio-Visual System for Object-Based Audio: From Recording to Listening", IEEE Transactions on Multimedia [accepted], Vol. xx, No. xx, must also be cited.

Additional references requested in relation to specific portions of the dataset should also be cited. The dataset may be downloaded by registered users only and must not be redistributed.

Related publications