dc.contributor.author | Pardo Franco, Arturo | |
dc.contributor.author | Gutiérrez Gutiérrez, José Alberto | |
dc.contributor.author | Streeter, Samuel S. | |
dc.contributor.author | Maloney, Benjamin W. | |
dc.contributor.author | López Higuera, José Miguel | |
dc.contributor.author | Pogue, Brian Wiliam | |
dc.contributor.author | Conde Portilla, Olga María | |
dc.contributor.other | Universidad de Cantabria | es_ES |
dc.date.accessioned | 2021-03-04T16:06:03Z | |
dc.date.available | 2021-03-04T16:06:03Z | |
dc.date.issued | 2020-04-01 | |
dc.identifier.issn | 0277-786X | |
dc.identifier.issn | 1996-756X | |
dc.identifier.other | FIS2010-19860 | es_ES |
dc.identifier.other | TEC2016-76021-C2-2-R | es_ES |
dc.identifier.uri | http://hdl.handle.net/10902/20859 | |
dc.description.abstract | With an adequate tissue dataset, supervised classification of tissue optical properties can be achieved in SFDI images of breast cancer lumpectomies with deep convolutional networks. Nevertheless, the use of a black-box classifier in current ex vivo setups produces output diagnostic images that are inevitably bound to show misclassified areas due to inter- and intra-patient variability, which could be misinterpreted in a real clinical setting. This work proposes the use of a novel architecture, the self-introspective classifier, in which part of the model is dedicated to estimating its own expected classification error. The model can be used to generate metrics of self-confidence for a given classification problem, which can then be employed to show how familiar the network is with the new incoming data. A heterogeneous ensemble of four deep convolutional models with self-confidence, each sensitive to a different spatial scale of features, is tested on a cohort of 70 specimens, achieving a global leave-one-out cross-validation accuracy of up to 81%, while being able to explain where in the output classification image the system is most confident. | es_ES
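The abstract describes a classifier with a dedicated branch that estimates its own expected classification error. The following is a minimal, hypothetical sketch of that idea (not the authors' code): a small convolutional backbone with two heads, one producing tissue-class logits and one producing a per-sample expected-error estimate. Layer sizes, the number of classes, and the loss weighting are illustrative assumptions only.

import torch
import torch.nn as nn

class SelfIntrospectiveCNN(nn.Module):
    """Toy two-headed CNN: class logits plus a self-confidence (expected-error) estimate."""
    def __init__(self, in_channels=3, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_classes)   # tissue-class logits
        self.confidence = nn.Linear(64, 1)           # estimate of own classification error

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.classifier(feats)
        err_hat = torch.sigmoid(self.confidence(feats))  # expected error in [0, 1]
        return logits, err_hat

def introspective_loss(logits, err_hat, targets):
    # Cross-entropy for the class head, plus an MSE term that trains the
    # confidence head to predict whether each sample was misclassified.
    ce = nn.functional.cross_entropy(logits, targets)
    wrong = (logits.argmax(dim=1) != targets).float()     # observed 0/1 error
    conf_loss = nn.functional.mse_loss(err_hat.squeeze(1), wrong)
    return ce + conf_loss

if __name__ == "__main__":
    model = SelfIntrospectiveCNN()
    x = torch.randn(8, 3, 64, 64)                 # batch of synthetic image patches
    y = torch.randint(0, 4, (8,))                 # synthetic labels
    logits, err_hat = model(x)
    loss = introspective_loss(logits, err_hat, y)
    loss.backward()
    print(loss.item(), err_hat.detach().squeeze(1))

In the paper, several such models sensitive to different spatial scales are combined in an ensemble; the sketch above only illustrates the single-model self-confidence mechanism under the stated assumptions.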
dc.description.sponsorship | Spanish Ministry of Science, Innovation and Universities (FIS2010-19860, TEC2016-76021-C2-2-R), Spanish Ministry of Economy, Industry and Competitiveness and Instituto de Salud Carlos III (DTS17-00055, DTS15-00238), Instituto de Investigación Valdecilla (INNVAL16/02, INNVAL18/23), Spanish Ministry of Education, Culture, and Sports (FPU16/05705). | es_ES
dc.format.extent | 13 p. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | SPIE - Society of Photo-Optical Instrumentation Engineers | es_ES
dc.rights | © 2020 Society of Photo Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. | es_ES |
dc.source | Proceedings of SPIE, 2020, 11362, 113620I | es_ES |
dc.source | Clinical Biophotonics Conference, France (Online), 2020 | es_ES |
dc.title | Automated surgical margin assessment in breast conserving surgery using SFDI with ensembles of self-confident deep convolutional networks | es_ES |
dc.type | info:eu-repo/semantics/conferenceObject | es_ES |
dc.relation.publisherVersion | https://doi.org/10.1117/12.2554965 | es_ES |
dc.rights.accessRights | openAccess | es_ES |
dc.identifier.DOI | 10.1117/12.2554965 | |
dc.type.version | publishedVersion | es_ES |