Show simple item record

dc.contributor.author: Duque Medina, Rafael
dc.contributor.author: Bravo, Crescencio
dc.contributor.author: Bringas Tejero, Santos
dc.contributor.author: Postigo Díaz, Daniel
dc.contributor.other: Universidad de Cantabria (es_ES)
dc.date.accessioned: 2024-01-19T18:10:49Z
dc.date.available: 2024-01-19T18:10:49Z
dc.date.issued: 2024-04
dc.identifier.issn: 0167-739X
dc.identifier.issn: 1872-7115
dc.identifier.other: PID2019-105660RB-C22 (es_ES)
dc.identifier.uri: https://hdl.handle.net/10902/31170
dc.description.abstract: User interfaces for digital twins (DTs) should provide information that allows the user to be aware of the state of the physical entity being virtualised. Typically, this real entity is a shared space in which various human and artificial agents interact (for instance, in smart cities, citizens and vehicles interact; in manufacturing, operators and machinery cooperate in production, etc.), and the user interface must provide information about its state. This work presents ADD (Awareness Description Diagrams), a technique for modelling requirements of Human-DT interaction. A study was conducted to virtualise a natural space where groups of hikers carry out their activities. The study assesses the learning curve of ADD, its feasibility for modelling Human-DT requirements, and its utility for designing user interfaces. The results of the study provide valuable insights into the effectiveness of the ADD technique. (es_ES)
dc.description.sponsorship: This work is funded in part by the Dirección General Universidades, Investigación e Innovación, Spain, the Junta de Comunidades de Castilla-La Mancha, Spain, and the European Regional Development Fund (ERDF), grant number SBPLY/21/180501/000244. This work has also been supported by the grant "Redes de Interconexion, Aceleradores Hardware y Optimizacion de Aplicaciones" (PID2019-105660RB-C22, funded by MCIN/AEI/10.13039/501100011033). (es_ES)
dc.format.extent: 11 p. (es_ES)
dc.language.iso: eng (es_ES)
dc.publisher: Elsevier (es_ES)
dc.rights: © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) (es_ES)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source: Future Generation Computer Systems, 2024, 153, 41-51 (es_ES)
dc.subject.other: Digital twins (es_ES)
dc.subject.other: Collaboration and interaction awareness (es_ES)
dc.subject.other: Human–computer interaction (es_ES)
dc.subject.other: Internet of Things (es_ES)
dc.subject.other: User interfaces (es_ES)
dc.subject.other: Visual languages (es_ES)
dc.title: Leveraging a visual language for the awareness-based design of interaction requirements in digital twins (es_ES)
dc.type: info:eu-repo/semantics/article (es_ES)
dc.relation.publisherVersion: https://doi.org/10.1016/j.future.2023.11.018 (es_ES)
dc.rights.accessRights: openAccess (es_ES)
dc.identifier.DOI: 10.1016/j.future.2023.11.018
dc.type.version: publishedVersion (es_ES)
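
For illustration only (not part of the record itself), the minimal Python sketch below shows how the Dublin Core fields above could be held as a plain mapping and used to print a one-line citation. The field names and values are copied from the record; the mapping layout, the record variable, and the format_citation() helper are hypothetical.

# Illustrative sketch: a few of the item's Dublin Core fields as a plain
# Python mapping. Values are copied from the record above; the structure
# and the format_citation() helper are hypothetical, for demonstration only.
record = {
    "dc.contributor.author": [
        "Duque Medina, Rafael",
        "Bravo, Crescencio",
        "Bringas Tejero, Santos",
        "Postigo Díaz, Daniel",
    ],
    "dc.title": [
        "Leveraging a visual language for the awareness-based design "
        "of interaction requirements in digital twins"
    ],
    "dc.source": ["Future Generation Computer Systems, 2024, 153, 41-51"],
    "dc.identifier.DOI": ["10.1016/j.future.2023.11.018"],
    "dc.date.issued": ["2024-04"],
}

def format_citation(rec: dict) -> str:
    """Build a one-line citation string from the mapping above."""
    authors = "; ".join(rec["dc.contributor.author"])
    year = rec["dc.date.issued"][0][:4]
    return (f"{authors} ({year}). {rec['dc.title'][0]}. "
            f"{rec['dc.source'][0]}. "
            f"https://doi.org/{rec['dc.identifier.DOI'][0]}")

if __name__ == "__main__":
    print(format_citation(record))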

