Optimum multi-service access selection over heterogeneous wireless networks

This paper proposes a methodology to analyse connectivity over highly heterogeneous wireless networks. We consider a scenario comprising a large number of access elements and end users, who move and initiate different services according to some patterns. The framework that has been implemented takes periodic snapshots, each of them used to pose a different optimisation problem. We take into account the intention of end users to have a connection (those with an active service), disregarding idle users, as well as the outcome of the previous problems (for instance, the base station a particular user was connected to). The feasibility of the proposed methodology is assessed with a scenario over which we study different access selection strategies, including the following criteria: price of resources, service affinity towards particular technologies and the willingness to reduce the number of handovers. The results validate the proposed methodology and highlight the impact that an appropriate design of the access selection strategy may have. Copyright © 2014 John Wiley & Sons, Ltd.


INTRODUCTION
It is estimated that mobile traffic will grow tenfold between 2013 and 2019 [1] and that approximately 90% of the world population will be able to use a WCDMA/HSPA (Wideband Code Division Multiple Access/High-Speed Packet Access) connection, while the penetration of LTE (Long Term Evolution) is continuously increasing as well; in this sense, the aforementioned report estimates that more than 65% of the world population will be covered by LTE in 2019. On top of this, it is worth highlighting the remarkable increase in the number of advanced devices (smartphones and tablets) with a cellular connection. These devices usually incorporate other radio access technologies (RATs), and thus the relevance of so-called multi-RAT networks is likely to increase.
With the aforementioned scenario in mind, the relevance of an optimum management of the available resources within wireless access networks has taken root again, gathering the interest of the scientific community. Despite the large efforts made during the last decade, just after the appearance of the always best connected motto [2], there are still many new challenges and aspects to be looked at.
In this sense, there exists a large number of works looking at different techniques, algorithms and protocols to better manage the resources within wireless access networks by inspecting the new possibilities brought about by novel elements that have recently emerged. Some of them base their conclusions on comparisons with different alternatives and approaches, but in some cases it would also be interesting to know how far they are from the best possible solution.
This work aims at answering the following question: what is the best performance that might be expected over a heterogeneous wireless access network? In order to give a reasonable answer, we pose a series of optimisation problems, in particular binary linear programs. We define a flexible utility function that is able to integrate various criteria for both network operators (for instance, load) and end users (for example, price) and that assigns a certain benefit to each of the access alternatives.
One of the most relevant aspects of this proposal is that, in order to establish the problem to be solved, we take into consideration the willingness of the users to have an active connection, depending on whether the service is active or idle (i.e. not all users want to be connected at all times), as well as how that service was previously handled (for example, the base station (BS) the user was connected to).
The paper is structured as follows. First, Section 2 outlines some of the work sharing the background of this one. Section 3 discusses the formulation of the problem, paying special attention to service modelling. Section 4 briefly describes some of the most relevant aspects of the implementation. Section 5 presents the particular access selection strategies that will be compared in the scope of this work, whereas Section 6 assesses the feasibility of the procedure by showing a number of results obtained with the developed framework for a particular scenario, over which we study a price-based load-balancing scheme. Finally, Section 7 concludes the paper, advocating some items that are left for future work and outlining some of the possibilities that might appear with the exploitation of the developed framework.

RELATED WORK
As said earlier, the always best connected motto was originally proposed by Gustafsson and Johnson in 2003 [2]. This research line took root in the last decade, and several proposals were made so as to optimally manage the resources in wireless access networks, providing the best possible quality of service to the end users. The reader might refer to [3-5] and the references therein for a thorough review of some of the approaches made at that time.
Afterwards, many new possibilities have been brought about by novel techniques that have recently emerged. A clear example is the capability to virtualise resources at the access network [6] as a means to offer tailored quality of experience to the end users. Another technique with a potential impact on the performance of wireless access networks is multi-path transmission (the possibility to split a flow between different paths, so as to increase performance or reliability) in multi-homed devices [7]. In addition, new requirements have also gained relevance, such as the need to optimise energy consumption [8]. Considering these aspects and analysing their influence over the performance and behaviour of multi-RAT networks is still an open research issue.
All in all, the proposals made around five years ago, although conceived so as to have a certain degree of flexibility, might not be able to address all the new challenges and requirements that are continuously appearing. Recently, a framework to promote Open Connectivity Services has been proposed [9] as a new paradigm to manage connectivity in forthcoming communication scenarios.
A common aspect of most of the works within the previously described research line (that is, those analysing algorithms, protocols and so on to foster a better operation of heterogeneous wireless access networks) is that they compare their performance with that shown by legacy alternatives, but they do not assess how close they are to an optimal behaviour [7,10]. The main goal of this paper is to propose a way to find this optimum solution, considering different parameters of merit over highly heterogeneous networks in which end users are able to use different types of services. This work is an evolution of the framework we originally presented in [11]. We have added support for an appropriate modelling of services (as opposed to the always-want-to-be-connected approach used in [11]), which has a clear influence on how the problem needs to be posed and which led to the integration of a module that keeps track of the history of previous connections. This is further discussed in Section 3.
As said earlier, we use binary linear programming to find the optimum solution. To the best of our knowledge, there are not many works that have fostered this approach. For instance, the authors of [12] propose a selection mechanism that uses a utility function to prioritise access alternatives, emphasising application- and energy-aware behaviour. They employ an integer linear programming-based technique to seek a solution that maximises the aforementioned utility. On the other hand, their whole procedure is only initiated upon a handover (HO) event, leaving aside situations in which better access alternatives might be available even if an HO is not strictly required.
Other works that use optimisation techniques are [13,14], but they propose utility functions that are tightly coupled to the scenarios they consider, leaving aside relevant parameters, such as the HO cost. In particular, [13] focuses on load balancing and efficient radio resource management mechanisms, whereas [14] gives greater relevance to the physical parameters of the wireless technologies and is therefore more limited when it comes to including other figures of merit. All in all, the methodology we present in this paper has a broader scope; by using some more abstractions, we gain additional flexibility in terms of the criteria we can integrate within the access selection strategy. Besides, none of the aforementioned works models services as we do herewith.

PROBLEM FORMULATION
The access selection problem aims at establishing the optimum association of the current active flows (services) of all the users amongst the available access alternatives. The scenario comprises N available access networks, which may use various technologies and thus have different characteristics in terms of coverage and capacity. We also assume that there are U users, each equipped with a terminal able to establish a connection with any of the involved technologies, who can simultaneously start S different flows/services.
We clarify herewith that, in the scope of this work, we assume that both services and BSs use a generic and discrete capacity unit (the so-called Traffic Unit, TU), regardless of whether it refers to time slots (Time Division Multiple Access, TDMA), codes (Code Division Multiple Access, CDMA), sub-carriers (Orthogonal Frequency Division Multiple Access, OFDMA) and so on. Any service requires a number of TUs to be properly handled by the network (if they cannot be assigned, the service is rejected/dropped), and each of the deployed BSs has a limit on the number of resources (TUs) it can assign to the users†. Other works, such as [13,15], also use this abstraction.
We formulate the problem as a binary integer programme, in which there are U · N · S basic variables (x_ijk), which can be defined as follows.
  x_ijk = 1  if user i uses network j for service k
  x_ijk = 0  otherwise                                        (1)

We will use a generic utility function (u_ijk) so as to qualify the goodness of a particular connection, according to different criteria. It is important to remark that this utility, although it depends on the particular circumstances and on the values the different criteria take for each of the available access alternatives, can be considered constant for a problem instance, thus ensuring the linearity of the proposed model (i.e. u_ijk does not depend on x_ijk). In addition, we pose three constraints: (3a) ensures that the basic variables to be solved are binary, (3b) forces a single flow to be connected to at most one BS, whereas (3c) limits the number of TUs that can be assigned by BS j (with capacity C_j) according to the capacity required by each of the services (c_k). With all of the above, we can pose the maximisation problem as follows.

  Max  Σ_i Σ_j Σ_k  u_ijk · x_ijk                             (2)

subject to

  x_ijk ∈ {0, 1}             ∀ i, j, k                        (3a)
  Σ_j x_ijk ≤ 1              ∀ i, k                           (3b)
  Σ_i Σ_k c_k · x_ijk ≤ C_j  ∀ j                              (3c)
Note that not all of the basic variables are part of the problem, because, for instance, we need to discard those that are not feasible due to the lack of physical connectivity between user i and BS j: x_ijk = 0 ∀k. In addition, when a particular service (say k) of end user i is not active, we can also add some additional constraints to the aforementioned problem: x_ijk = 0 ∀j.
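As an illustration of the snapshot problem just posed, the following self-contained Python sketch solves a toy instance of (2)-(3c) by exhaustive search; a real deployment would use a proper solver (the tool described later relies on GLPK). All users, networks, capacities and utility values below are hypothetical.

```python
from itertools import product

# Hypothetical toy instance: 2 users, 2 base stations, 1 service type.
utility = {(0, 0, 0): 0.9, (0, 1, 0): 0.5,   # u_ijk values
           (1, 0, 0): 0.8, (1, 1, 0): 0.6}
cap_bs = [1, 2]    # C_j, BS capacities in traffic units (TUs)
cap_srv = [1]      # c_k, TUs required per service type

def solve_snapshot(utility, cap_bs, cap_srv):
    """Exhaustively maximise sum u_ijk * x_ijk subject to (3a)-(3c).
    Constraint (3b) holds by construction: each (user, service) pair
    either stays unconnected (None) or picks exactly one BS."""
    pairs = sorted({(i, k) for (i, _, k) in utility})
    best_val, best_assign = -1.0, {}
    for choice in product([None] + list(range(len(cap_bs))), repeat=len(pairs)):
        load = [0] * len(cap_bs)
        val, ok = 0.0, True
        for (i, k), j in zip(pairs, choice):
            if j is None:
                continue
            if (i, j, k) not in utility:   # no physical connectivity
                ok = False
                break
            load[j] += cap_srv[k]
            if load[j] > cap_bs[j]:        # capacity constraint (3c)
                ok = False
                break
            val += utility[(i, j, k)]
        if ok and val > best_val:
            best_val = val
            best_assign = {p: j for p, j in zip(pairs, choice) if j is not None}
    return best_val, best_assign

val, assign = solve_snapshot(utility, cap_bs, cap_srv)
print(val, assign)   # → 1.5 {(0, 0): 0, (1, 0): 1}
```

Note that the capacity of BS 0 forces the second user onto BS 1, even though BS 0 would give both users a higher individual utility.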
As mentioned earlier, one of the most relevant contributions of this work compared with our previous paper [11] is that we do not consider all users in every instance of the optimisation problem, but only those willing to have a connection. In [11], we looked at the overall connectivity, and therefore every user wanted to have a connection, no matter whether he/she had an ongoing service or not. In this sense, the way the problem is formulated herewith is more elaborate. In order to appropriately include service modelling, we consider that a particular application flow, once it is rejected or dropped, is not considered in the following optimisation problems (until the service is restarted again). For that, we need to maintain the history of the previous optimisation outcomes, and we define a state machine for all possible user/service combinations. As a result, we establish four different circumstances for each of them, as shown in the following.

† In this work, we abstract the differences between technologies, and we do not consider the impact that the particular physical conditions of the links might have over the capacity provided by each of the resource units assigned by the BSs.
- Idle. The user i does not currently have service k active, and therefore he/she does not require any connectivity for it.
- Active. The user i has an ongoing connection for service k, and this has been accepted by one of the available access alternatives.
- Rejected. The user i initiated a flow for service k, but it was not accepted by the network.
- Dropped. The user i had an ongoing flow for service k that was originally accepted, but it eventually stopped before its correct finalisation, due to user mobility or other events.
With the aforementioned states, and considering that the optimisation, for a single analysis run, consists of the resolution of a series of consecutive snapshots, we can establish the evolution of the state for a particular user/service pair as shown in Figure 1. The arrows represent state transitions, which depend on two aspects: the intention of such user for a particular service and the outcome of the current optimisation problem. These are the two numbers on top of each arrow (state transition): intention/optimisation_result. It is straightforward to see that, depending on the previous state and the current intention, there are some cases in which the corresponding variables do not enter the optimisation problem, represented in the figure by dashed arrows. No matter what the previous state is, when user i does not have the intention of having service k connected, the corresponding basic variables do not enter the optimisation problem, as discussed earlier (x_ijk = 0 ∀j), and such pair goes to the idle state. In addition, whenever a service has been rejected or dropped, the corresponding variables are left out of the optimisation problem even if the intention is to have a connection. Furthermore, a connection that has been lost cannot be recovered, and thus the state is kept until it goes back to the idle state. This assumption is required so as to make the traces (and therefore the traffic demand) independent from the outcome of the analysis; in this sense, we can study various access selection strategies under the same conditions.
In most of the cases discussed earlier, the optimisation_result is equal to zero, because the corresponding basic variable does not belong to the optimisation problem. On the other hand, the continuous lines show the transition of the user/service pairs towards the final state, depending on the outcome of the current optimisation problem.
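The state machine just described can be sketched as follows; the function name and the boolean encoding of intention/optimisation_result are illustrative, not the tool's actual interface.

```python
# A minimal sketch of the user/service state machine. State names follow
# the text; rejected/dropped flows stay out of the optimisation until the
# service is restarted (i.e. the pair returns to IDLE).
IDLE, ACTIVE, REJECTED, DROPPED = "idle", "active", "rejected", "dropped"

def next_state(prev, intention, opt_result):
    """intention: does user i want service k connected in this snapshot?
    opt_result: did the optimisation connect it (only meaningful when the
    pair actually entered the problem)?"""
    if not intention:                 # no connection wanted -> idle
        return IDLE
    if prev in (REJECTED, DROPPED):   # kept out until the service restarts
        return prev
    if prev == IDLE:                  # new call: accepted or rejected
        return ACTIVE if opt_result else REJECTED
    # prev == ACTIVE: ongoing call either kept or dropped
    return ACTIVE if opt_result else DROPPED

print(next_state(IDLE, True, True))   # a new call that was accepted
```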

IMPLEMENTATION
A solution of a single optimisation problem refers to the optimum connectivity for a particular situation, which is given by the current position of the end users, their willingness to have a connection for a particular service, and the current status of the network (remaining capacity). Those particular situations can therefore be seen as snapshots of a scenario in which users move and generate flows for their corresponding services within a certain period. These patterns (traffic and movement) are provided by means of traces. In order to solve a particular scenario, a tool has been developed from scratch. It has two well-defined modules. The first one solves each of the optimisation problems posed by the series of snapshots; a process is created per problem, and thus there is no 'real' connection between the current problem and the previous solutions. As discussed before, to properly pose the optimisation problem, it is important to take into account the history of the scenario (services that were rejected or dropped, load at the BSs, etc.); in order to keep track of this historical evolution and to provide the information to the current snapshot, another module, the monitor, has been added to the tool (this was not needed for the previous, stateless approach [11]). It analyses the outcome of each optimisation problem and stores it so that it can be used by the next snapshot; in addition, it is in charge of maintaining the overall statistics of the scenario in order to process them when all the snapshots have been solved.
As said earlier, the tool is fed with a set of traces that must reflect (i) the position of the end users, according to some predefined mobility pattern, and (ii) the intention of all the users to have a connection for each of their services. Figure 2 shows an illustrative example of three users and their services and how these are mapped onto the corresponding snapshots. It is worth mentioning that the time a service remains in the idle or active states shall always be longer than the snapshot period, so as to keep track of the new incoming flows, as is the case of service 0 for user i in Figure 2. For a particular user/service (i/j) pair, we can thus define the vector Ω_i^j = [ω_0, ω_1, …, ω_N], in which ω_k is the status of such service at the k-th snapshot. As can be seen, the format selected for the traces processed by the application is rather generic, and therefore its use can be easily extended (for comparison purposes) to other studies.
The procedure followed for each of the snapshots is shown in Procedure 1. We divide it into three different phases. (i) First, we establish the scenario by reading the overall configuration parameters, the position and characteristics of the access elements (BSs, access points) and the particular position and service status of the end users; all this information is gathered from external files. (ii) Once the scenario is deployed, we establish the physical connectivity between end users and access elements and the current state of each user/service pair; we also build the utility function by considering, depending on the particular configuration, the previous access element (if the user/service pair was already connected), the current load of the access elements and the price offered to a particular user/service. (iii) Once the utility function is established, the problem is solved by using a solver (in this work, we have used the GLPK library [16]), and the result is processed to update the state of all the user/service pairs and the corresponding statistics.
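The snapshot loop and the monitor module can be sketched roughly as follows; solve_stub is a deliberately simplified greedy stand-in for the GLPK-based solver, and the trace format is invented for the example.

```python
# A minimal, runnable sketch of the per-snapshot loop plus the monitor
# module that carries history between snapshots. All names are illustrative.
def solve_stub(demands, capacity):
    """Greedy stand-in for the real solver: accept each demand (in TUs)
    while capacity remains; returns one accept/reject flag per demand."""
    out, used = [], 0
    for c in demands:
        ok = used + c <= capacity
        out.append(ok)
        if ok:
            used += c
    return out

def run(trace, capacity):
    """trace: per-snapshot list of (pair_id, required_TUs) intentions.
    Simplification: a rejected pair stays out of all later snapshots."""
    history = {}                             # monitor: pair_id -> outcome
    stats = {"accepted": 0, "rejected": 0}
    for snapshot in trace:
        # phases (i)+(ii): keep only pairs not already rejected
        active = [(p, c) for (p, c) in snapshot
                  if history.get(p) != "rejected"]
        # phase (iii): solve, then update monitor state and statistics
        flags = solve_stub([c for _, c in active], capacity)
        for (p, c), ok in zip(active, flags):
            history[p] = "accepted" if ok else "rejected"
            stats["accepted" if ok else "rejected"] += 1
    return stats

trace = [[("u0/s0", 2), ("u1/s0", 2)],       # snapshot 0
         [("u0/s0", 2), ("u1/s0", 2)]]       # snapshot 1
print(run(trace, capacity=3))
```

In the example, u1/s0 is rejected in the first snapshot (only 3 TUs available) and is therefore filtered out of the second one, mirroring how rejected flows are excluded from later problems until restarted.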

ACCESS SELECTION STRATEGIES
As mentioned earlier, the main idea is to model various criteria with the utility function so as to assess the goodness of a particular connection. There exists a broad range of possibilities; within the scope of this paper, we will focus on two particular aspects, pricing and RAT affinity (RA), which can be described as the preference to connect particular services to specific technologies, as well as their combination. In addition, we will also study their interaction with the cost of changing the BS, that is, the preference an end user would have towards maintaining the current access. We will see that different combinations of these criteria may lead to rather distinct performances, so an appropriate selection of the strategy is of the utmost relevance.

Common criteria
Two criteria are used for all the strategies that will be analysed: the willingness to have a connection and the intention to reduce the number of HOs. They are discussed as follows.

Connectivity.
As mentioned earlier, we will give some utility to connectivity per se, prioritising ongoing services over new calls, because it is sensible to assume that dropping an already established call is worse (from the perspective of the quality of experience) than rejecting a new one. This criterion (α_ijk) is defined as follows:

  α_ijk = c_k      if user/service i/k has an ongoing connection
  α_ijk = ε · c_k  if it is a new call                        (4)

where c_k is the capacity required by service k, and ε (a design parameter) is selected so as to ensure that an ongoing service is always given a higher priority than a new call, for all the considered capacities: ε < min_∀k c_k / max_∀k c_k. As can be seen, the definition of this criterion considers the capacity required by each of the services; otherwise, those needing less capacity would indirectly be given a higher priority: a connection of a service requiring two TUs would have the same utility as a connection of a one-TU service, while consuming twice the resources, so the optimisation engine would favour two connections of the latter service over one belonging to the former.
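A quick numeric check of this criterion as reconstructed above (the two-case form of (4) is our reading of the text): with the two TU requirements used later in the paper and ε = 0.4, the worst-scoring ongoing flow still beats the best-scoring new call.

```python
# Connectivity criterion alpha_ijk: ongoing flows score c_k, new calls
# score eps * c_k, with eps < min_k(c_k) / max_k(c_k) so that any ongoing
# flow outranks any new call. Values are the paper's 1- and 2-TU services.
def alpha(c_k, ongoing, eps=0.4):
    return c_k if ongoing else eps * c_k

caps = [1, 2]                 # TUs required by the two service types
eps = 0.4                     # satisfies eps < min(caps)/max(caps) = 0.5
worst_ongoing = min(alpha(c, True, eps) for c in caps)    # 1.0
best_new = max(alpha(c, False, eps) for c in caps)        # 0.8
print(worst_ongoing > best_new)   # ongoing services always prioritised
```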

Handover.
We will consider the influence that the cost of change (HO) might have on the assessed performance. In this sense, we model the willingness to keep the current access (so as to include the cost of change). For that, we use the β_ijk criterion, defined as follows:

  β_ijk = 1 + φ  if user/service i/k was previously connected to BS j
  β_ijk = 1      otherwise                                   (5)

where φ < 1 is a design parameter, selected depending on the particular configuration, as will be discussed later.

Price criterion
The idea is to favour the monetary preference of the end users, who would opt for more economical connections. The utility a user perceives for a certain service connection increases as the BS offers a lower price; we are thus looking for a decreasing function. Because we want to enable relative comparisons (i.e. based on discount percentages) between offered prices, we propose using a logarithmic function (see Figure 3(a)):

  χ_ijk = log(p_max^BS / p_j) / log(p_max^BS / p_min^BS)     (6)

where p_j is the price offered by BS j, given in monetary units per time and capacity unit, and p_min^BS and p_max^BS correspond to the lowest and highest prices a BS may offer for its resources, respectively. As can be seen, the first one is the fee below which end users would not perceive any additional utility gain. We also assume that those BSs offering a price higher than the maximum an end user would be willing to pay (user preferences) are discarded. Relative price units are used, and therefore the maximum price offered by any of the BSs is 1.0. As will be seen later, we will also assume that p_min^BS = 0.1 monetary units per time and capacity unit.
Moreover, we will also consider that BSs use their offered price so as to encourage or deter users from connecting to them; this is reflected in Figure 3(b), which represents the fee offered by a BS as a function of its current relative available capacity. As can be seen, when the BS is highly loaded (available capacity lower than L_th^low), the offered price is the maximum allowable one (operator policies and rules); on the other hand, if the BS load is low, the offered price decreases to a minimum configured level. For the sake of simplicity, a linearly decreasing trend has been used between these two points; in the analysis that will be discussed in Section 6, we have used 0.2 and 0.8 for the lower and upper thresholds, respectively.
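The following sketch implements a plausible reading of both pricing pieces: the logarithmic utility of (6), whose exact functional form is our reconstruction, and the load-dependent fee of Figure 3(b). Parameter values follow the text: p_min = 0.1, p_max = 1.0, thresholds 0.2 and 0.8.

```python
import math

# Logarithmic, decreasing price utility plus the load-dependent fee a BS
# offers, as described in the text. Functional forms are a reconstruction.
P_MIN, P_MAX = 0.1, 1.0
L_LOW, L_HIGH = 0.2, 0.8

def price_utility(p):
    p = max(p, P_MIN)                    # no extra gain below p_min
    return math.log(P_MAX / p) / math.log(P_MAX / P_MIN)

def offered_price(avail):
    """Fee as a function of the BS's relative *available* capacity."""
    if avail <= L_LOW:                   # highly loaded -> maximum price
        return P_MAX
    if avail >= L_HIGH:                  # lightly loaded -> minimum price
        return P_MIN
    # linear decrease between the two load thresholds
    return P_MAX - (P_MAX - P_MIN) * (avail - L_LOW) / (L_HIGH - L_LOW)

# The log form makes relative comparisons possible: a 20% discount gives
# the same utility gain at any absolute price level.
g1 = price_utility(0.8) - price_utility(1.0)
g2 = price_utility(0.4) - price_utility(0.5)
print(round(g1, 3), round(g2, 3))        # → 0.097 0.097
```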

RAT affinity criterion
This criterion is conceived to favour that particular services are handled by their preferred technologies, so as to bring about a better quality of experience. For instance, we could use WiFi accesses for data transfer services, because they benefit from higher bandwidths but do not have strict delay requirements, while we would establish a preference of voice services towards cellular BSs, better suited for them. The corresponding component of the utility function (δ_ijk) is therefore defined as follows:

  δ_ijk = 1  if service k has affinity towards the technology of BS j
  δ_ijk = η  otherwise                                       (7)

where η < 1 is a design parameter, which modulates the relevance given to this particular aspect of the utility function.

Utility function
By linearly combining all the introduced criteria, we can define an overall utility function, which establishes the corresponding access selection strategy:

  u_ijk = A · α_ijk + B · β_ijk + C · χ_ijk + D · δ_ijk      (8)
There are two possibilities to favour one criterion over the others: (i) changing the corresponding factors (A, B, C, D); or (ii) using only a binary version of them (i.e. they are activated or not) and tuning the corresponding design parameters (ε, φ, η); in the scope of this work, we will use the second alternative. In particular, we will fix ε = 0.4 and η = 0.8, while we discuss the value for φ in the succeeding text. For that, we do not consider RA (i.e. D = 0). We then compare the utilities of two different access alternatives for an ongoing service (note that the value of α_ijk is not relevant, because it is alike for the two accesses). The end user has two choices: the one to which she is currently connected (i) and another one (ii), which offers a cheaper fee than (i). In particular, we assume that p_2 is 100 · θ % lower than p_1, that is, p_2 = (1 − θ) · p_1, with θ < 1. Changing the access is then worthwhile only if the price-utility gain exceeds the HO penalty, that is, if χ(p_2) − χ(p_1) = −log(1 − θ) / log(p_max^BS / p_min^BS) > φ. Therefore, if we consider that a discount of 20% should be enough for an end user to change her current access, the value of φ to be used is 0.1.
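The choice of φ can be checked numerically under the reconstructed price utility (base-10 behaviour follows from p_max/p_min = 10):

```python
import math

# Utility gain of a relative discount theta under the logarithmic price
# utility with p_max / p_min = 10: gain(theta) = -log10(1 - theta).
def gain(theta):
    return -math.log10(1.0 - theta)

print(round(gain(0.20), 3))           # → 0.097, hence phi = 0.1
# Conversely, the discount needed to overcome phi = 0.1:
theta_for_phi = 1.0 - 10 ** (-0.1)
print(round(theta_for_phi, 3))        # → 0.206, i.e. roughly 20%
```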
With all of the above in mind, we can define the six strategies that will be used during the analysis presented in the next section, as listed in Table I. Each of them corresponds to a different instance of the utility function introduced earlier.

RESULTS
This section presents some results obtained by applying the previously presented method over a particular network deployment. The goal is twofold: first, to assess its feasibility, and second, to discuss the impact of an appropriate selection of the utility function.
In particular, we consider a 200 × 200 m² scenario, in which we deploy two different types of BSs (see Figure 4). The first one corresponds to a traditional cellular technology, with a coverage of 150 m (effectively covering the whole scenario) and a capacity of 16 TUs, whereas the second one mimics a WiFi access router, with a range of 50 m and a capacity of eight TUs. Over such a scenario, we deploy a number of users, which we increase from 20 to 200; they move according to a random waypoint model, whose parameters are given in Table II. Each user generates flows belonging to two different service types, following ON-OFF models and having some preference towards a particular technology. All the BSs use the pricing policy depicted before (Figure 3(b)), and thus they offer a higher fee when the currently carried load is higher. We run 10 independent simulations per scenario (each of them comprises 360 different optimisation problems, corresponding to a simulation time of 1 h with snapshots taken every 10 s), and we represent the corresponding average results. We consider the six strategies introduced in Table I, and we analyse the figures of merit enumerated as follows for the two types of service (0 and 1).
- Success rate (SR). Probability that a service is successfully finished (i.e. it is neither rejected nor dropped).
- HO. Average number of HOs carried out during a service lifetime.
- Price per service (PS). Average service price paid per time and traffic unit.
- RA. Percentage of the time that the service was using the technology towards which it has some preference.
Before discussing the performance metrics, Figure 5 shows the traffic demand generated by all the users, relative to the overall network capacity. Note that the traffic demand does not exactly correspond to the offered traffic (which would show a linear increase with a slope of 0.7, the traffic offered per user according to the parameters shown in Table II), but to the one used to pose the corresponding optimisation problem. As mentioned earlier, whenever a service is rejected or dropped, it is no longer considered in future problems until it is restarted. This effectively reduces the traffic demand, explaining the saturation behaviour observed in Figure 5(a). We also represent (Figure 5(b)) the relative carried load per RAT type; we can see that the cellular BS gets almost fully loaded (for the three strategies‡), because it covers the whole scenario. The load of the WiFi access points increases as the number of users grows. Even though the traffic demand surpasses the network capacity, there are still some available resources in these access points, because they do not fully cover the area under analysis.

First, Figure 6 shows the probability for a service to be successful. As can be seen, when there is enough available capacity (i.e. 20 or 40 users), the utility function definition (in particular, the α criterion) leads to similar results for the two types of services (if we had not considered the capacity in the corresponding definition, the SR for service 1 would have been lower). Afterwards, the probability for a service to be successful is higher for service 0, because it requires fewer resources. There is one exception, for the price w/o HO strategy, in which service 1 performs better than service 0.
In this case, cheaper connections are favoured (without looking at the possible HO side effects), and therefore service 0 calls try to use WiFi accesses as soon as they become available§, as opposed to the other strategies, where the RA criterion fosters that service 0 calls are handled by the cellular BS. Afterwards, because of user mobility, the connection might no longer be available, and the call would probably need to be dropped, because the cellular BS might be fully loaded (see Figure 5). In addition, there is also an increase in the number of rejected services, because it is rather unlikely that a new call from service 0 would cause a call from service 1 to be dropped (the additional utility of the price criterion is low, because the load, and thus the offered price, is high); on the other hand, in those strategies strengthening the RAT affinity criterion, a new call from service 0 would quite likely cause an ongoing service 1 call to be dropped (if it is using the cellular BS), because of the stronger weight given to this particular criterion. This, together with the fact that service 0 calls are longer than those of service 1, explains the aforementioned exception.

‡ In this case, we have used the strategies that considered the HO criterion, because the results obtained for the others are alike.
§ Those accesses are likely to be cheaper because the corresponding access points are less loaded, as can be seen in Figure 5.
On the other hand, Figure 7 shows the clear impact of considering the HO criterion. We can see a decrease in the number of HOs (in particular for service 0) when the HO criterion is considered in the utility function (B = 1). The impact is less relevant for service 1, especially when the number of users is large, because the networks get highly loaded (which is more relevant for the cellular BS, see Figure 5) and there might not be many connection alternatives. On the other hand, when B = 0 (i.e. the cost of change is not considered), we can also see (again for service 1) that the RA strategy leads to a lower number of HOs, whereas the behaviour for service 0 is different. This is in part due to the ping-pong effect that might arise in this particular strategy (a double change of access has no impact on the overall utility function), which is more relevant for service 0 flows, because they are longer.

Regarding the price that a user needs to pay (per traffic and time unit) per service, Figure 8 yields an interesting result. For service 0, we get the expected behaviour, and the price strategy leads to lower prices (as compared with the RA one). However, for service 1, we can see that the RA strategy leads to prices slightly lower than those obtained for the price-based one. The reason is that the WiFi access points (due to their coverage) are less loaded than the cellular BS, and thus the price they offer (following the previously presented pricing policy) is lower. Because services belonging to type 0 stick to the cellular BS (regardless of the price), according to the RA criterion, the prices offered by the WiFi access points are lower, and therefore the price paid for service 1 is consequently lower. On the other hand, this result also reflects the fact that the price-based strategy just seeks a global price reduction, without distinguishing between service types; the prices for both of them tend to be alike.
In this case, it is also worth highlighting that there is no clear dependency on the HO criterion, because the results do not change between the utility functions that took it into account and those that did not. Besides, the figure also shows that the higher the carried load, the higher the PS. It is also interesting to note that, for service 0, the RA strategy always leads to a higher price, because it aims at having all the corresponding calls handled by the cellular BS, which becomes fully loaded even when the number of users is still small (see Figure 5(b)).

Figure 9 shows the probability for a service to be connected to the technology it has a certain affinity towards. We can see that the use of the appropriate strategy does have a relevant effect on this figure of merit, because the results obtained for the RA strategy are much higher than those observed for the price one. We can also observe the little influence of the HO criterion. Last, but not least, the obtained results also show that service 1 gets better results than service 0 regarding the RA criterion for all the considered strategies. This is because service 1 prefers using WiFi access points and, as we have already discussed, these access alternatives are less loaded than the cellular BS; in addition, it is also worth bearing in mind that those services are shorter, so it is less likely that they would need to connect to the cellular BS due to user mobility after being connected to a WiFi access router.
In order to get a global view of the overall performance of the various strategies, Figure 10 uses a spider graph in which we represent the four figures of merit (for the two types of service). The edges of the different axes represent the best potential performance for each of them, whereas the centre of the spider can be considered the worst result (for the HO parameter, we used the empirically observed values to establish the performance bounds). The results were obtained for the scenario with 100 users. As was already discussed, we can see the improvement brought about by the integration of the HO criterion into the utility function: the rest of the criteria are not clearly affected, whereas there is an enhancement in the number of HO required per service. In the case of the price strategy, we can also see that the use of the HO criterion brings about an improvement in the SR for service 0, although this is offset by the decrease in the corresponding value for service 1. In general, all figures show that the strategies considering the two goals (i.e. price and RA) lead to performances that are rather similar to those obtained for the one strengthening just RA (without jeopardising the PS too much), whereas for the price-based strategy, the performance in terms of RA is remarkably worse.

CONCLUSIONS
In this paper, we have proposed the use of binary linear programming techniques to assess the best performance that might be expected over highly heterogeneous access environments. The developed framework, which considers the time evolution of services, is generic enough to extend its usage to different use cases. We have assessed its feasibility by means of a particular scenario, over which we studied strategies considering price, RA and cost of change (HO). The results show that an appropriate selection of the utility function criteria leads to rather different performance results, so this is an aspect that needs to be thoroughly considered when establishing the goodness of the assessed performance. In particular, the performance figures reported above show that the integration of the cost of change (HO) within the utility function brings relevant benefits, because it leads to a significant improvement in terms of the average number of HO, while it does not jeopardise the rest of the figures of merit (in fact, there were also some slight enhancements in certain cases).
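The per-snapshot optimisation at the core of the framework can be sketched with a toy stand-in: enumerate every user-to-access mapping and keep the feasible one with the highest total utility. This brute force is only viable for tiny instances (the paper's actual approach is a binary linear programme solved per snapshot); the names, the capacity model and the example utility below are our own illustrative assumptions:

```python
from itertools import product

def best_assignment(users, accesses, capacity, utility):
    """Toy stand-in for the per-snapshot binary programme: try every
    user -> access mapping (None models a rejected user), discard
    mappings that exceed any access element's capacity, and return
    the feasible mapping with the highest total utility."""
    best, best_u = None, float("-inf")
    options = list(accesses) + [None]
    for assign in product(options, repeat=len(users)):
        loads = {a: assign.count(a) for a in accesses}
        if any(loads[a] > capacity[a] for a in accesses):
            continue  # infeasible snapshot: some access is over capacity
        total = sum(utility(u, a) for u, a in zip(users, assign) if a is not None)
        if total > best_u:
            best, best_u = assign, total
    return best, best_u

# Tiny snapshot: three active users, one WiFi AP (capacity 1) and one
# cellular BS (capacity 2); here WiFi is worth more to every user.
users = ["u1", "u2", "u3"]
accesses = ["wifi", "cell"]
capacity = {"wifi": 1, "cell": 2}
assign, total = best_assignment(users, accesses, capacity,
                                lambda u, a: 2.0 if a == "wifi" else 1.0)
```

In the framework proposed in the paper, the outcome of one snapshot (e.g. the BS each user was connected to) feeds the next problem instance, which is where the HO cost enters the utility.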
Regarding future work, the main objective is to exploit the potential of the developed framework to assess the goodness of different distributed algorithms and procedures for access selection in highly heterogeneous networks, including the use of virtualised resources and flow management techniques. The idea would be to analyse how far their behaviour is from the optimum performance. Besides, in the analysis discussed in Section 6, we have seen that there is an impact when considering different combinations of the utility function criteria; we have provided a discussion of the cost of change (HO) and its relationship with the price, but there are still other interrelations that require further analysis; this will also be considered in our future research. On the other hand, the design of the tool is rather generic and it is not rigidly tied to the solver; it can therefore be used to mimic any potential scenario as long as the input traces follow the appropriate format. In this sense, we will also study the impact of considering other types of traffic with more complicated models (for instance, elastic traffic), which would likely lead to nonlinear optimisation problems. Furthermore, we will also enhance the abstractions that have been made regarding the capacity; in this sense, we will modulate the capacity that a user gets from a resource based on the particular conditions of the link with the corresponding BS.