dc.contributor.author | Ouadefli, Mostapha | |
dc.contributor.author | Et-tolba, Mohamed | |
dc.contributor.author | Tribak, Abdelwahed | |
dc.contributor.author | Fernández Ibáñez, Tomás | |
dc.contributor.other | Universidad de Cantabria | es_ES |
dc.date.accessioned | 2025-03-04T16:31:43Z | |
dc.date.available | 2025-03-04T16:31:43Z | |
dc.date.issued | 2025-02 | |
dc.identifier.issn | 1937-8718 | |
dc.identifier.uri | https://hdl.handle.net/10902/35852 | |
dc.description.abstract | Neural networks have become a focal point for their ability to effectively capture the complex nonlinear characteristics of power amplifiers (PAs) and to facilitate the design of digital predistortion (DPD) circuits. This is accomplished through the use of nonlinear activation functions (AFs), which are the cornerstone of a neural network architecture. In this paper, we delve into the influence of eight carefully selected AFs on the performance of neural network-based DPD. In particular, we explore their interaction with both the depth and the width of the neural network. In addition, we provide an extensive performance analysis using two crucial metrics: the normalized mean square error (NMSE) and the adjacent channel power ratio (ACPR). Our findings highlight the superiority of the exponential linear unit activation function (ELU AF) among the AFs under consideration, particularly within deep neural network (DNN) frameworks. | es_ES
dc.format.extent | 10 p. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | EMW Publishing | es_ES |
dc.rights | © EMW Publishing. The Electromagnetics Academy. Reproduced courtesy of The Electromagnetics Academy | es_ES |
dc.source | Progress in Electromagnetics Research C, 2025, 152, 111-120 | es_ES |
dc.title | On selecting activation functions for neural network-based digital predistortion models | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.rights.accessRights | openAccess | es_ES |
dc.identifier.DOI | 10.2528/PIERC24120508 | |
dc.type.version | publishedVersion | es_ES |