S3A Audio-Visual System for Object-Based Audio dataset

Updated: 19 November 2020
Contact person: Philip Coleman <p.d.coleman@surrey.ac.uk>

Software Tools
Software tools can be found here.

Experiments
Figures 9, 10, and 11 can be recreated using the code and data supplied here (.zip, 204 MB).

License
This data was created as part of the EPSRC programme grant "S3A: Future spatial audio for an immersive listening experience at home". The copyright is owned by the Centre for Vision, Speech and Signal Processing, University of Surrey, UK. Permission is hereby granted to use the S3A Audio-Visual System for Object-Based Audio dataset for academic purposes only, provided that it is suitably referenced in publications related to its use as follows:

Coleman, P., et al. (2018), "S3A Audio-Visual System for Object-Based Audio," DOI 10.15126/surreydata.00845514.

P. Coleman et al., "An Audio-Visual System for Object-Based Audio: From Recording to Listening," in IEEE Transactions on Multimedia, vol. 20, no. 8, pp. 1919-1931, Aug. 2018, doi: 10.1109/TMM.2018.2794780.

Additional references requested in relation to specific portions of the dataset should also be cited. The dataset may be downloaded by registered users only and must not be redistributed.

Related publications
P. Coleman et al., "An Audio-Visual System for Object-Based Audio: From Recording to Listening," in IEEE Transactions on Multimedia, vol. 20, no. 8, pp. 1919-1931, Aug. 2018, doi: 10.1109/TMM.2018.2794780.