Publications with the keyword: image captioning
- [7] Transfer learning from language models to image caption generators: Better models may not transfer better (Tanti, M; Gatt, A and Camilleri, KP), arXiv preprint arXiv:1901.01216, 2019.
- [6] Quantifying the amount of visual information used by neural caption generators (Tanti, M; Gatt, A and Camilleri, K), In Computer Vision – ECCV 2018 Workshops: Proceedings of the Workshop on Shortcomings in Vision and Language (Leal-Taixé, L; Roth, S, eds.), Springer, 2019.
- [5] Pre-gen metrics: Predicting caption quality metrics without generating captions (Tanti, M; Gatt, A and Muscat, A), In Computer Vision – ECCV 2018 Workshops: Proceedings of the Workshop on Shortcomings in Vision and Language (Leal-Taixé, L; Roth, S, eds.), Springer, 2019.
- [4] Where to put the image in an image caption generator (Tanti, M; Gatt, A and Camilleri, K), Natural Language Engineering, volume 24, 2018.
- [3] Face2Text: Collecting an Annotated Image Description Corpus for the Generation of Rich Face Descriptions (Gatt, A; Tanti, M; Muscat, A; Paggio, P; Farrugia, R; Borg, C; Camilleri, K; Rosner, M and van der Plas, L), In Proceedings of the 11th edition of the Language Resources and Evaluation Conference (LREC'18), 2018.
- [2] Predicting visual spatial relations in the Maltese language (Muscat, A and Gatt, A), In Breaking Barriers: Junior College Multidisciplinary Conference, University of Malta Junior College, 2018.
- [1] What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator? (Tanti, M; Gatt, A and Camilleri, K), In Proceedings of the 10th International Conference on Natural Language Generation (INLG'17), Association for Computational Linguistics, 2017.