IMT-2020 Simulation Plans

Jose F. Monserrat (Universitat Politècnica de València)

Last week, on 21 June 2017, the ITU-R WP5D meeting in Niagara Falls concluded with little progress, but at least with a clear definition of the agenda to standardize IMT-2020 Radio Interface Technologies, i.e. 5G. Although the radio community is eager to claim that 5G is just around the corner, the 5G label will not be granted by ITU-R until October 2020, likely more than two years after the first “5G” networks have been deployed.

The full process includes the following milestones:

  • March 2016: the invitation to propose Radio Interface Technologies (RITs) was released.
  • July 2019: deadline for the reception of candidate proposals.
  • February 2020: all evaluation reports from external evaluation groups must be received.
  • June 2020: ITU-R will provide the key characteristics of 5G technologies.
  • October 2020: ITU-R will finish the RIT specification recommendations.

In the meantime, the technological evaluation of the candidate technologies is also under way. As in previous generations, candidate radio access technologies will be proposed, and a set of institutions and research centers will carry out a comprehensive evaluation to determine whether or not the candidates meet the requirements set by ITU-R for this mobile generation. For 5G, the requirements are still under discussion, although good hints are already available in draft form [1]. The steps of the whole procedure are described in the figure below.


Figure 1. Detailed procedure of the IMT-2020 evaluation [2].

With respect to the simulation of the 5G candidates, ITU-R is currently working on a document, the ITU-R M.[IMT-2020.EVAL] report, to be released by November this year, which will include all the details for simulations. Evaluations will be performed in strict compliance with the technical parameters provided by the proponents and with the evaluation configurations specified for the test environments in this report. What we already know (a draft version has existed since June 2017) is the set of Key Performance Indicators (KPIs) to be evaluated and some details on the test environments and network layout for simulations. As one of the most interesting novelties compared with the IMT-Advanced process, there will be two new urban macrocellular test environments:

  • Urban Macro–mMTC: an urban macro environment targeting continuous coverage focusing on a high number of connected machine type devices.
  • Urban Macro–URLLC: an urban macro environment targeting ultra reliable and low latency communications.

The second one is likely to focus on Vehicle-to-Vehicle (V2V) communications, since this seems to be the most significant service that will define 5G [3]. It remains to be seen which propagation models will be used for V2V links and how shadowing effects will be taken into account in this type of communication. The correct modeling of these two aspects is fundamental for an accurate assessment of the performance of the RIT candidates [4][5], which is why it is the subject of long discussions within WP 5D.

The following principles are to be followed when evaluating RITs for IMT‑2020:

  • Proposals can be evaluated through simulation, analysis, or inspection.
  • Evaluation through simulation comprises both system-level and link-level simulations. Independent evaluation groups may use their own simulation tools for the evaluation.
  • In the case of evaluation through analysis, the evaluation is based on calculations that use the technical information provided by the proponent.
  • In the case of evaluation through inspection, the evaluation is based on statements in the proposal.

The IMT-2020 submission and evaluation process is guided by Resolution ITU-R 65. At this point in time, interested groups can still apply to become external evaluators of the IMT-2020 RIT candidates. So far, the following groups have been accepted:

  • 5G Infrastructure Association
  • ATIS WTSC IMT-2020 Evaluation Group
  • ChEG Chinese Evaluation Group
  • Canadian Evaluation Group
  • Wireless World Research Forum
  • Telecom Centres of Excellence, India
  • The Fifth Generation Mobile Communications Promotion Forum, Japan
  • TTA 5G Technology Evaluation Special Project Group

Now the main question is whether there will be more than one candidate technology or not…


[1] ITU-R SG05 Contribution 40, “Draft new Report ITU-R M.[IMT-2020.TECH PERF REQ] – Minimum requirements related to technical performance for IMT-2020 radio interface(s)”, February 2017.

[2] ITU-R IMT.2020 Contribution 2, “Submission, evaluation process and consensus building for IMT-2020”, June 2016.

[3] Calabuig, Jordi; Monserrat, Jose F; Gozalvez, David; Klemp, Oliver; “Safety on the roads: LTE alternatives for sending ITS messages”, IEEE Vehicular Technology Magazine, vol. 9, no. 4, pp. 61-70, 2014.

[4] Monserrat, JF; Fraile, R; Rubio, L; “Application of alternating projection method to ensure feasibility of shadowing cross-correlation models”, Electronics Letters, vol. 43, no. 13, 2007.

[5] Monserrat, J; Fraile, R; Cardona, N; Gozalvez, J; “Effect of shadowing correlation modeling on the system level performance of adaptive radio resource management techniques”, Wireless Communication Systems, 2005. 2nd International Symposium on, 2005.

[6] IMT-2020 submission and evaluation process webpage.



Frequency bands for 5G Systems

Ki Won Sung (KTH Royal Institute of Technology)

5G systems are expected to provide a wide range of services, including enhanced mobile broadband (eMBB), massive machine type communications (mMTC), and ultra-reliable low latency communications (URLLC). Providing these enhanced and new services will require more frequency spectrum for 5G. Furthermore, the characteristics of each service may suit different frequency bands, ranging from low to high, e.g., from sub-1 GHz up to 100 GHz. Therefore, it is important to identify the frequency bands that are both available and suitable for 5G systems.

At the international level, the most important decisions on spectrum allocation are made at the World Radiocommunication Conference (WRC), which is organized by the International Telecommunication Union (ITU) every four years. The latest WRC was held in 2015 (WRC-15), and the next one will be held in 2019 (WRC-19). At WRC-15, agreement was reached on a WRC-19 Agenda Item (1.13) to consider the identification of frequency bands for the future development of International Mobile Telecommunications (IMT), including possible additional allocations to the mobile service on a primary basis, in accordance with Resolution 238 (WRC-15). This entails appropriate sharing and compatibility studies for a number of bands between 24 and 86 GHz in time for WRC-19. The frequency bands under study are detailed in Figure 1 [1].


Figure 1: Frequency bands for studies for IMT in ITU-R until WRC-19 [1].

Apart from the higher frequency bands studied for WRC-19, parts of the 3400-3800 MHz range have attracted interest in various regions of the world. In Europe, the entire 3400-3800 MHz band is harmonized for mobile/fixed communications networks (MFCN) according to an ECC decision [2]. The Radio Spectrum Policy Group (RSPG) considers 3400-3800 MHz to be the primary band for the introduction of 5G services, given that the band is already harmonized and offers wide channel bandwidths of 100 MHz or more [3]. In Japan, the Ministry of Internal Affairs and Communications published the national report “Radio Policies Towards 2020s”, which selected 3.6-4.2 GHz, amongst others, as a national candidate band for 5G [4]. China is also studying the availability of 3.3-3.4 GHz and has announced a 5G trial in the 3.4-3.6 GHz band [4]. In the USA, the Federal Communications Commission (FCC) has established the Citizens Broadband Radio Service (CBRS) in the 3550-3700 MHz band on a shared and technology-neutral basis. CBRS employs a three-tiered spectrum authorization framework to accommodate a variety of incumbent federal and commercial non-federal users on a shared basis. Specifically, it defines three hierarchies of spectrum users: incumbent access, priority access, and general authorized access [5]. In addition to 3550-3700 MHz, the “Mobile Now” Act proposes further studies on 3100-3550 MHz and 3700-4200 MHz, which could offer an additional 500 MHz of bandwidth in the 3.5 GHz range [6].
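As a rough illustration of the three-tier CBRS idea, a minimal arbitration sketch is shown below. The tier names follow [5], but the function and its behaviour are a deliberate simplification for illustration; the actual FCC Spectrum Access System logic is far richer (geographic exclusion zones, license areas, sensing):

```python
# Lower number = higher priority: an incumbent pre-empts Priority Access
# License (PAL) users, who in turn pre-empt General Authorized Access (GAA).
TIER_PRIORITY = {"incumbent": 0, "priority_access": 1, "general_authorized": 2}

def channel_grant(requests):
    """Given (user_id, tier) requests contending for the same channel,
    grant it to the request with the highest-priority tier
    (ties broken by request order)."""
    return min(requests, key=lambda r: TIER_PRIORITY[r[1]])[0]

winner = channel_grant([("gaa-7", "general_authorized"),
                        ("navy-radar", "incumbent"),
                        ("pal-op-2", "priority_access")])
# winner == "navy-radar": the incumbent always pre-empts the lower tiers
```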



[1] ICT-671680 METIS-II, Deliverable D3.2 Version 1, “Enablers to secure sufficient access to adequate spectrum for 5G”, June 2017.

[2] ECC/DEC/(11)06, “Harmonised frequency arrangements for MFCN operating in the bands 3400-3600 MHz/3600-3800 MHz”, December 2011.

[3] RADIO SPECTRUM POLICY GROUP Opinion on spectrum related aspects for next-generation wireless systems (5G), “STRATEGIC ROADMAP TOWARDS 5G FOR EUROPE”, November 2016.

[4] A GSA Executive Report from Ericsson, Intel, Huawei, Nokia and Qualcomm, “The case for new 5G spectrum”, November 2016.

[5] US Federal Communications Commission (FCC), “Amendment of the Commission’s Rules with Regard to Commercial Operations in the 3550-3650 MHz Band”, April 2015.

[6] “Mobile Now” Act, March 2017.

5G System Level Aspects of Operations in Higher Frequencies Regimes

Michał Maternia (Nokia)

In recent years, wireless communication systems have been exploiting higher and higher frequency ranges in order to find the radio resources necessary to serve the rising traffic demand. Some WRC-15 bands selected for study in the context of 5G, as well as the 28 GHz band chosen in large markets for 5G deployments, span hundreds of MHz and allow straightforward handling of bandwidth-hungry broadband services [1]. However, radio propagation in the millimetre wave region differs strongly from propagation at traditional cellular frequencies. Firstly, radio signals face much higher path losses and, secondly, propagation phenomena such as diffraction and reflection are, for most materials, much less prominent. This inevitably leads to the exploitation of massive MIMO antenna systems in millimetre wave bands.

As described in previous blog entries (cf. MIMO techniques and architectures for millimetre wave mobile communications), massive MIMO antennas that exploit beamforming are used at higher frequencies to cope with the increased path losses. A critical factor for wideband equipment is the power consumption of digital-to-analog converters (DACs), which scales with the sampling rate and the number of bits per sample. For this reason, at higher frequencies analog beamforming solutions with a low number of DACs are preferred over digital beamformers, which require a separate DAC for each processed transmission stream. After analog beamforming, the radio beam offers high antenna gain (tens of dB) and is much narrower (a few degrees of 3 dB beam width) compared with the output of contemporary sector antennas. This narrowness brings several novel system-level design implications that were not present in previous cellular generations.
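The DAC cost argument can be made concrete with a common first-order converter power model, P ≈ FOM · 2^bits · f_s. The figure of merit (FOM) used below is an arbitrary illustrative value, not a measured one; real converters vary by orders of magnitude:

```python
def dac_power_watts(sample_rate_hz, bits, fom_j_per_conv_step=1e-12):
    """First-order DAC/ADC power model: P ~ FOM * 2^bits * f_s.
    The FOM (energy per conversion step) is an assumed, purely
    illustrative value."""
    return fom_j_per_conv_step * (2 ** bits) * sample_rate_hz

# Doubling the sample rate doubles the power; each extra bit doubles it too.
p_narrow = dac_power_watts(100e6, 8)   # 100 MHz sampling, 8-bit
p_wide   = dac_power_watts(2e9, 10)    # 2 GHz sampling, 10-bit: 80x the power
```

This scaling is why a fully-digital array with one wideband high-resolution DAC per antenna element quickly becomes the dominant power cost at mmWave bandwidths.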


Figure 1: Operations with sector antennas (left) and mMIMO with analog beamforming (right)

Initial access

During the transition from idle to connected mode (e.g., when we want to use some service after a longer inactivity period), the radio network (gNB) and the user equipment (UE) need to determine the spatial direction of a suitable communication link, which boils down to the selection of one out of several potential radio beams (the so-called P-1 procedure [2]). In order to facilitate this selection, 3GPP specifies selected radio time slots that the gNB uses to sweep several spatial directions with transmissions of so-called synchronization signal blocks. In each block (spanning 4-6 OFDM symbols), a transmission over one beam direction consists of synchronization signals and the broadcast information that needs to be obtained before the exchange of initial access information [3]. In other slots, used to detect initial access messages on the Random Access Channel (RACH), the gNB tunes its receive antennas to sweep receive beams. If there is a fixed time relation between transmission and reception on a given beam, the UE can calculate it from the system information broadcast in the synchronization signal blocks, and use the appropriate timing for the transmission of the initial access preamble to indicate the suitable radio beam. Alternative solutions are also possible; e.g., in the case of no beam correspondence at the gNB, the UE may repeat the initial access preamble over several transmission opportunities.
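The P-1 beam selection described above can be sketched as follows. The one-to-one mapping from the strongest SSB beam index to a RACH slot offset is an assumed simplification of the beam correspondence idea, not the specified 3GPP mapping:

```python
import numpy as np

def select_ssb_beam(rsrp_dbm_per_beam):
    """P-1 style selection sketch: the UE measures one synchronization
    signal block per swept gNB beam and picks the strongest. With beam
    correspondence, the chosen beam index also determines the RACH
    occasion (modelled here as a fixed beam-index -> slot-offset map,
    which is an assumption)."""
    best = int(np.argmax(rsrp_dbm_per_beam))
    rach_slot_offset = best  # assumed: one RACH occasion per swept beam
    return best, rach_slot_offset

measured = [-112.0, -95.5, -101.2, -120.0]  # RSRP per SSB beam, in dBm
beam, slot = select_ssb_beam(measured)      # beam 1 is the strongest
```

By transmitting its preamble in the RACH occasion tied to the strongest beam, the UE implicitly tells the gNB which transmit beam to use for the response.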

Mobility management

In previous cellular generations, the UE had to monitor radio signals from neighbouring radio cells to facilitate a potential switch of the serving cell. When radio signals are confined to narrow beams, a different approach is needed. To start with, 3GPP is working on new procedures for beam tracking/refinement at the gNB side, needed e.g. due to UE movement (P-2), and for tracking/refinement of UE beams, needed e.g. due to UE rotation (P-3) [4]. Furthermore, instead of reporting measurements for the best cells, the UE will report measurements for a number of the best beams detected. Additional standardization effort is being put into the development of mechanisms to recover after beam failure:

  • detection of the beam failure,
  • identification of the new beam candidate,
  • beam failure recovery request transmission, and
  • response for the beam failure recovery request.
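The four steps above can be sketched as a minimal state machine. The thresholds and event names are illustrative assumptions, not 3GPP-specified values:

```python
def beam_failure_recovery(serving_rsrp_dbm, candidate_rsrp_dbm,
                          failure_threshold_dbm=-110.0,
                          candidate_threshold_dbm=-105.0):
    """Walk through the four recovery steps; returns the event log."""
    events = []
    # 1) detection of the beam failure
    if serving_rsrp_dbm >= failure_threshold_dbm:
        return events  # serving beam still fine, nothing to do
    events.append("beam_failure_detected")
    # 2) identification of the new beam candidate
    best = max(candidate_rsrp_dbm, key=candidate_rsrp_dbm.get)
    if candidate_rsrp_dbm[best] < candidate_threshold_dbm:
        events.append("no_candidate_found")
        return events
    events.append(f"candidate_identified:{best}")
    # 3) recovery request transmission and 4) gNB response (assumed success)
    events += ["recovery_request_sent", "recovery_response_received"]
    return events

log = beam_failure_recovery(-118.0, {"beam3": -99.0, "beam5": -107.0})
```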

Resource management

Due to the high cost of the radio processing chains of gNBs operating at higher frequencies, digital MIMO operations (including higher-order MIMO, i.e. the transmission of multiple data streams) and frequency division multiplexing of the scheduled users are not straightforward. Therefore, the most accessible scheme is a beamformed transmission toward a single user in a single slot. To enable efficient resource and interference management in 5G, 3GPP is working on the design of specific reference signals and feedback types. Because of the high directivity of radio transmissions, cross-link interference mitigation is needed; in 5G, at least the information on the intended UL/DL transmission direction configuration is exchanged over the backhaul (other methods, including gNB-gNB and UE-UE measurements, are being investigated [5]). When operating at higher frequencies, UEs need to track specific phase-tracking reference signals to avoid additional phase noise errors resulting from drifts in the local oscillators. Finally, as cloud deployments gain more and more traction, more centralized scheduling mechanisms will also come into play in 5G. This is also reflected in the 3GPP decision to split the gNB into a centralized and a distributed unit [6].
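The backhaul exchange of intended UL/DL configurations can be sketched as a simple per-slot conflict check between two neighbouring gNBs. The slot-format representation below is an assumption for illustration:

```python
def crosslink_conflicts(cfg_a, cfg_b):
    """cfg_*: per-slot direction strings, 'UL' or 'DL', for two
    neighbouring cells. Returns the slot indices where the cells use
    opposite directions, i.e. where the DL transmission of one cell can
    interfere with the UL reception of the other (cross-link interference)."""
    return [i for i, (a, b) in enumerate(zip(cfg_a, cfg_b)) if a != b]

conflicts = crosslink_conflicts(["DL", "DL", "UL", "UL"],
                                ["DL", "UL", "UL", "DL"])
# conflicts == [1, 3]: in slots 1 and 3 the two cells point in opposite
# directions, so the scheduler can realign them or apply mitigation there
```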


[1] METIS-II D3.1 “5G spectrum scenarios, requirements and technical aspects for bands above 6 GHz”, ICT-671680 METIS-II Deliverable 3.1, Version 1.

[2] 3GPP, Study on New Radio Access Technology Physical Layer Aspects (Release 14), 3GPP TR 38.802, March 2017.

[3] 3GPP, Summary of discussion on SS block composition, SS burst set composition and SS time index indication, 3GPP TDoc R1-1706534, April 2017.

[4] 3GPP, WF on Framework of Beam management’ 3GPP TDoc R1-1703523, February 2017.

[5] 3GPP, WF on cross link interference mitigation enablers, 3GPP TDoc R1-1706222, April 2017.

[6] 3GPP, Study on new radio access technology: Radio access architecture and interfaces (Release 14), 3GPP TR 38.801, March 2017.

5G Architecture Design Verification

Heinz Droste (Deutsche Telekom AG)

The main goal of the 5GPPP project 5G NORMA (Novel Radio Multiservice adaptive network Architecture) is to propose a multi-service mobile network architecture that adapts the use of the mobile network resources to the service requirements, the variations of the traffic demands over time and location, and the network topology. Basics of those novel 5G architectures can be found in [1, chapter 3].

5G NORMA key innovations are

  • Adaptive (de)composition and allocation of mobile network functions to optimize performance on a per-service and per-scenario basis
  • Multi-service and context-aware adaptation of network functions for efficient support of multiple services
  • Mobile network multi-tenancy to reduce deployment and operational costs (CAPEX and OPEX)
  • Software-based mobile network control to allow flexible operation and
  • Pooling of network functions for joint optimization of mobile access and core network functionalities in order to achieve significant performance improvements.

At the beginning of the project, use cases were defined that, together with their related requirements, helped in the definition of the 5G NORMA architecture, as well as scenarios combining several use cases that, all together, challenge the key innovations that 5G NORMA claims. In order to meet all requirements within the project runtime, a two-step architecture design iteration is executed.

Verification Methodology

Each architecture design iteration is accompanied by comprehensive verification activities that check the fulfillment of different requirements from a system point of view, based on generic 5G services as defined in METIS [2].

Applied evaluation criteria are depicted in Figure 1.


Figure 1: 5G NORMA Evaluation criteria.

To organize the verification of requirements and KPIs, more tangible roll-out case studies have been defined that provide a link between evaluations of technical and economic feasibility. In addition, these roll-out case studies may reveal potential showstoppers and challenges that become visible when putting the developed system into practice. For a typical urban sample area in London, three so-called evaluation cases have been created. A baseline evaluation case emulates the development of enhanced mobile broadband (eMBB) radio access networks (RAN) in the sample area for the time span between 2020 and 2030. A multi-tenant evaluation case expands the view from one to up to four mobile operators and identifies the benefits of 5G NORMA multi-tenant networks compared with single-operator networks. Finally, a multi-service evaluation case adds to the sample RAN network slices for massive machine type communications (mMTC) and vehicle-to-anything communications (V2X), which includes ultra-reliable low latency services (URLLC). For the current design iteration, intermediate verification results for eMBB performance, functional, operational and security requirements, as well as soft-KPI fulfillment, are compiled in [3].

Intermediate results

Performance requirements for eMBB incorporate peak data rates, different kinds of transmission latencies, network capacity, and network behavior at increasing device velocities (mobility). Some of these requirements (e.g. peak data rates and mobility) are not in the scope of 5G NORMA, and the respective performance results have to be collected from other research projects. The baseline evaluation case revealed that most MBB traffic, in the future as in the past, will be carried by WiFi. Nevertheless, future spectrum extensions at macro sites will lead to bottlenecks in antenna panel deployment, which could hopefully be mitigated by 5G NORMA multi-tenant bare-metal sharing.

Network capacity is measured as the data volume that the network is capable of carrying during the busy hour. We could show that, under realistic assumptions and an assumed annual increase of traffic densities of 20%, macro cells with their limited spectrum efficiency would not be able to provide sufficient capacity to cope even with this moderate traffic growth. Hence, small cell layers in lower and higher frequency bands will be needed. Because of limited line-of-sight cell ranges, the contribution of small cell layers in high frequency bands will be limited by their capability to offload the macro layer. Hence, it can be concluded that, from a capacity perspective, there is no need for spectrum efficiency improvements for mmW radio nodes.
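The capacity argument can be reproduced with a short compounding calculation. The assumed starting busy-hour load of 40% of macro-layer capacity is purely illustrative, not a 5G NORMA result:

```python
def years_until_exhausted(initial_load, capacity, annual_growth=0.20):
    """Number of full years of compound traffic growth until busy-hour
    demand first exceeds the available capacity."""
    years, load = 0, initial_load
    while load <= capacity:
        load *= 1.0 + annual_growth
        years += 1
    return years

# A macro layer at 40% busy-hour load today is exhausted after 6 years of
# 20% annual growth (0.4 * 1.2**6 ~ 1.19 > 1), well within the 2020-2030
# window studied in the baseline evaluation case.
n = years_until_exhausted(initial_load=0.4, capacity=1.0)
```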

Many of the functional requirements identified for the generic 5G services are already fulfilled at the current stage of the 5G NORMA design iteration [3]. Further, since selected requirements, e.g., functionality for controlling device networks, the ability to keep track of devices, and the ability to discover the topology of vehicle-to-vehicle (V2V) networks, are rather in the scope of other 5G-PPP projects, 5G NORMA will consolidate an overview of possible solutions in the final report.

According to [4], the operational requirements considered up to now are mainly related to the deployment of multi-tenant and multi-service networks. Enablers for multi-tenant dynamic resource allocation, service-specific and context-aware adaptation and placement of network functions, as well as dynamic network monitoring, are being, or will be, investigated until the end of the project.

Assessing the threats and risks present in a complex system, and the compliance with its security requirements, is a difficult task and must be done by (partially subjective) expert assessments rather than by formal verification. Regarding tenant isolation, the usual risks exist that tenant SLAs and data confidentiality, integrity, and availability are violated. In addition, tenants must trust their mobile service providers (MSPs), and MSPs must trust potential third-party infrastructure providers (InPs) to appropriately secure the functions hosted by external platforms. Besides, new security concepts described in [3], and novel concepts provided by the research community as a whole, may be integrated into the 5G NORMA architecture.

The check of ‘soft KPIs’ shall, in general, make sure that the results of the architecture design are mature enough for implementation. In our verification, we could prove that the interfaces between service management and management and orchestration are able to carry all the information needed for the automated processing of service deployment requests according to so-called offer types. Offer types distinguish between the degrees of service control that are exposed to a tenant. The verification of the scalability of centrally arranged management and control functions builds on methods used to evaluate the scalability of SDN controllers. Further, the introduction of several network instances (so-called network slices) increases complexity, e.g., by multiplying many of the existing network operability processes. While this aspect can be tackled by increased levels of automation, the number of feasible network slices is rather going to be limited by scarce bottleneck resources (e.g. spectrum, backhaul capacity) [5].

Next steps

Topics to be addressed until the end of the project are listed in the table below.

Topics identified for the next architecture design iteration phase:

  Topic                                                                    | Check of fulfilment of
  Update of performance requirements (mMTC, V2X, e2e latency)              | Performance requirements
  Multi-connectivity                                                       | Performance requirements
  Opportunities for RAN sharing (virtual, bare metal & spectrum resources) | Operational requirements
  Backhaul aspects                                                         | Operational requirements
  Adaptation and placement of virtual network functions (VNFs)             | Operational requirements
  Investigation of network programmability                                 | Functional requirements
  Investigation of QoE based routing and network agility                   | Functional requirements
  Investigation of Edge function mobility                                  | Functional requirements
  Assessment of mobility concepts                                          | Functional requirements
  Protocol overhead analysis                                               | Functional requirements
  Reliability concepts, reliability prediction                             | Functional requirements
  Update of security requirements                                          | Security requirements
  Internal and external interfaces, comparison of 4G/5G interfaces         | Soft KPI
  Demonstrator learnings                                                   | Soft KPI
  Trial runs implementing multi-tenant and multi-service networks          | Soft KPI
  Economic evaluations (WP2 part of verification)                          | Economic feasibility


[1] A. Osseiran, J. F. Monserrat and P. Marsch, “5G mobile and wireless communications technology,” Cambridge University Press, 2016.

[2] METIS D6.6 “Final report on the METIS 5G system concept and technology roadmap,” ICT-317669 METIS Deliverable 6.6, Version 1, May 2014

[3] EU H2020 5G NORMA, “D 3.2: 5G NORMA network architecture – Intermediate report”, Jan. 2017

[4] EU H2020 5G NORMA, “D 3.1: Functional Network Architecture and Security Requirements”, Dec. 2015

[5] C. Mannweiler et al., “5G NORMA: System Architecture for Programmable & Multi-Tenant 5G Mobile Networks”, submitted to the European Conference on Networks and Communications (EuCNC), March 2017.

MIMO techniques and architectures for millimeter wave mobile communications

Paolo Baracca (Nokia Bell Labs)

Exploiting high carrier frequencies for mobile communications is a fundamental enabler to cope with the ever-increasing throughput demand. In fact, millimeter wave (mmWave) bands offer huge portions of spectrum that can be used to deliver very high data rates to and from mobile users. Compared with current 4G systems working at frequencies below 6 GHz, mmWave base stations and users are expected to be equipped with many more antennas, which is viable due to the reduced antenna element size. Therefore, multiple-input multiple-output (MIMO) precoding, in the form of either beamforming or spatial multiplexing, will be adopted. However, which MIMO scheme represents the best tradeoff at these high carrier frequencies in terms of energy consumption, cost and performance is still under investigation. As explained in [1, Chapter 6], there are three main MIMO architecture alternatives for mmWave systems: analog, fully-digital and hybrid beamforming.

 Analog beamforming

Users operating at mmWave frequencies are usually more noise- than interference-limited and, therefore, good performance can be obtained by applying a simple beamforming technique where users are multiplexed in the time domain, only one data stream per time slot is sent, and all the antennas are used to provide array gain to compensate for the high path loss characterizing mmWave bands [2]. As illustrated in Figure 1(a), this scheme can be realized, for instance at the transmitter side, by using a fully analog beamforming architecture, where only one digital-to-analog converter (DAC) and one radio frequency (RF) chain are required. Namely, analog beamforming is performed in the RF analog domain, for instance by using M phase shifters, each one mapped to an antenna element. This architecture allows the transmitter to generate a wideband beam that focuses the power toward a specific direction to increase the signal-to-noise ratio (SNR) at the receiver. Nevertheless, a fully analog beamforming architecture also has some limitations, such as the fact that beams are wideband and spatial multiplexing is not possible.
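The array gain provided by the M phase shifters can be sketched for a uniform linear array: a perfectly aligned beam yields 10·log10(M) dB of gain, e.g. about 18 dB for M = 64. The half-wavelength spacing and the angles below are illustrative assumptions:

```python
import numpy as np

def ula_steering(m_antennas, angle_deg, spacing_wavelengths=0.5):
    """Steering vector of a uniform linear array (half-wavelength spacing)."""
    m = np.arange(m_antennas)
    phase = 2j * np.pi * spacing_wavelengths * m * np.sin(np.deg2rad(angle_deg))
    return np.exp(phase)

def analog_beam_gain_db(m_antennas, steer_deg, arrival_deg):
    """Array gain when the M phase shifters steer toward steer_deg and the
    signal arrives from arrival_deg; aligned, the gain is 10*log10(M) dB."""
    w = ula_steering(m_antennas, steer_deg) / np.sqrt(m_antennas)  # unit-norm weights
    h = ula_steering(m_antennas, arrival_deg)
    return 20 * np.log10(np.abs(np.conj(w) @ h))

aligned    = analog_beam_gain_db(64, 30.0, 30.0)  # ~18 dB when aligned
misaligned = analog_beam_gain_db(64, 30.0, 38.0)  # far lower off the beam peak
```

Evaluating the gain over a range of arrival angles also reproduces the narrow 3 dB beam width mentioned in the companion blog entry on higher-frequency system aspects.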

 Fully-digital beamforming

System performance can be strongly improved by using a fully-digital beamforming architecture, as shown in Figure 1(b), with one RF chain per antenna element. By implementing the precoder in the digital baseband (BB) domain, this transceiver allows both a) implementing different precoders on different sub-bands (with the aim of compensating the frequency selectivity of the channel) and b) performing multi-stream transmission, for instance to simultaneously serve two line-of-sight users that are physically separated. Due to the large number of RF chains required, the main drawbacks of this architecture are its high cost and energy consumption. More effort is needed to make this architecture feasible at mmWave. However, some preliminary studies have already shown interesting results in this direction, for example by using very low resolution analog-to-digital converters to strongly decrease the cost and energy consumption of the transceiver [3].
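Per-sub-band digital precoding can be sketched with a zero-forcing precoder applied to one sub-band's channel matrix; in practice the function would be called once per sub-band with that sub-band's channel estimate. The random Rayleigh channel below is purely illustrative:

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoder for one sub-band: H is (users x antennas).
    The right pseudo-inverse nulls inter-user interference; columns are
    normalized so each stream is transmitted with unit power."""
    W = np.conj(H.T) @ np.linalg.inv(H @ np.conj(H.T))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(1)
# two users, 16-antenna fully-digital transmitter, one sub-band
H = (rng.standard_normal((2, 16)) + 1j * rng.standard_normal((2, 16))) / np.sqrt(2)
W = zf_precoder(H)
E = H @ W  # effective channel: diagonal, i.e. inter-user interference nulled
```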

 Hybrid beamforming

A compromise between the two architectures described above is the hybrid beamforming architecture shown in Figure 1(c). Although different configurations are possible [4, Figure 2], the general idea consists in equipping the device with P RF chains, where P is much smaller than the number of antennas, i.e., P << M, thus still allowing some precoding flexibility but at reduced cost and power consumption. Hybrid beamforming architectures can be used in different complementary ways. For instance, multiple beams in the analog domain can be used to send a data stream to a specific user whose channel has few strong paths, i.e., the multiple beams are used to compensate for the multipath fading and, in turn, to increase the SNR. As an alternative, spatial multiplexing to serve multiple users can be implemented with a joint design of a) wideband beams in the RF analog domain and b) some more advanced per-sub-band precoding, like zero-forcing, in the digital BB domain. Several works have already shown that hybrid beamforming can achieve the performance of fully-digital beamforming in scenarios with one base station serving one or multiple users. However, the gap between these two architectures tends to increase when RF impairments are included, showing that the performance of hybrid beamforming can significantly vary depending on the specific hardware implementation [5]. Recently, we performed a rather comprehensive system-level analysis comparing these beamforming options with many interfering base stations serving users in different channel conditions, which also required user selection algorithms [6, 7]. Our results, targeting dense urban scenarios and also taking hardware impairments into account, confirmed that hybrid beamforming represents a good tradeoff for mmWave mobile communications, being able to achieve performance close to fully-digital beamforming in many relevant cases.
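A minimal hybrid precoding sketch follows, under the simplifying assumptions of a uniform linear array, perfectly known single-path channels, and analog beams pointed exactly at the users' path angles (all assumptions for illustration):

```python
import numpy as np

def ula_steering(m, angle_deg):
    """Half-wavelength uniform linear array steering vector."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(angle_deg)))

def hybrid_precoder(H, beam_angles_deg):
    """Hybrid precoder with P = len(beam_angles_deg) RF chains and M
    antennas (P << M): F_rf holds phase-only analog beams (one per RF
    chain), and a small digital zero-forcing stage F_bb operates on the
    reduced P-dimensional effective channel H @ F_rf."""
    M = H.shape[1]
    F_rf = np.stack([ula_steering(M, a) for a in beam_angles_deg], axis=1) / np.sqrt(M)
    H_eff = H @ F_rf                                    # users x P effective channel
    F_bb = np.conj(H_eff.T) @ np.linalg.inv(H_eff @ np.conj(H_eff.T))
    return F_rf @ F_bb                                  # overall M x users precoder

# Two users whose single-path channels (conjugated array responses, an
# assumption) arrive from -20 and +35 degrees, served with P = 2, M = 32.
H = np.conj(np.stack([ula_steering(32, -20.0), ula_steering(32, 35.0)]))
F = hybrid_precoder(H, beam_angles_deg=[-20.0, 35.0])
E = H @ F  # ~identity: interference suppressed with only 2 RF chains
```

In this idealized setting the 2-RF-chain hybrid precoder matches fully-digital zero-forcing; as noted above, the gap appears once RF impairments and richer channels are modelled [5].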


 Figure 1: MIMO architectures for mmWave mobile transmission: (a) analog beamforming, (b) fully-digital beamforming and (c) hybrid beamforming.


[1] A. Osseiran, J. F. Monserrat and P. Marsch, “5G mobile and wireless communications technology,” Cambridge University Press, 2016.

[2] S. Sun, T. S. Rappaport, R. W. Heath, A. Nix and S. Rangan, “MIMO for millimeter-wave wireless communications: beamforming, spatial multiplexing, or both?,” in IEEE Communications Magazine, vol. 52, no. 12, pp. 110-121, Dec. 2014.

[3] R. W. Heath, N. González-Prelcic, S. Rangan, W. Roh and A. M. Sayeed, “An overview of signal processing techniques for millimeter wave MIMO systems,” in IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 3, pp. 436-453, Apr. 2016.

[4] R. Méndez-Rial, C. Rusu, N. González-Prelcic, A. Alkhateeb and R. W. Heath, “Hybrid MIMO architectures for millimeter wave communications: phase shifters or switches?,” in IEEE Access, vol. 4, pp. 247-267, Jan. 2016.

[5] A. Garcia-Rodriguez, V. Venkateswaran, P. Rulikowski and C. Masouros, “Hybrid analog–digital precoding revisited under realistic RF modeling,” in IEEE Wireless Communications Letters, vol. 5, no. 5, pp. 528-531, Oct. 2016.

[6] S. Gimenez, S. Roger, D. Martín-Sacristán, J. F. Monserrat, P. Baracca, V. Braun and H. Halbauer, “Performance of hybrid beamforming for mmW multi-antenna systems in dense urban scenarios,” in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Valencia (Spain), Sep. 2016.

[7] S. Gimenez, S. Roger, P. Baracca, D. Martín-Sacristán, J. F. Monserrat, V. Braun and H. Halbauer, “Performance evaluation of analog beamforming with hardware impairments for mmW massive MIMO communication in an urban scenario,” MDPI Sensors, 16(10):1555, Sep. 2016, doi: 10.3390/s16101555.

Revision of 5G dimensions, use cases and requirements

Mikael Fallgren and Afif Osseiran (Ericsson)

The most central use cases and requirements on 5G capabilities are still being finalized in both ITU [1] and 3GPP [2]. There seems to be agreement regarding the main use cases to be supported, illustrated in Figure 1, though additional use cases may naturally appear and evolve over time. These main use cases span three different dimensions: enhanced mobile broadband, massive machine type communications, and ultra-reliable low latency communications.

Enhanced Mobile Broadband (eMBB): The market demand that also drove 3G and 4G mobile networks, i.e. the extended support of conventional Mobile Broadband (MBB) through improved peak/average/cell-edge data rates, capacity, and coverage, remains one of the main driving forces behind 5G, where it is referred to as eMBB. The main eMBB requirements in 5G networks are:

  • peak data rates: 20 Gbps in Downlink (DL), 10 Gbps in Uplink (UL),
  • user experienced data rates (5th percentile user throughput): 100 Mbps (DL), 50 Mbps (UL),
  • area capacity: 10 Mbps/m² (indoor hotspot),
  • user plane latency: 4 ms (average one-way).

Massive Machine Type Communications (mMTC): The envisioned 5G Internet of Things (IoT) scenario, with tens of billions of connected devices and sensors, has relaxed data rate and latency demands, but at the same time imposes strict requirements on:

  • connection density: 10⁶ devices/km²,
  • coverage: 164 dB Maximum Coupling Loss (MCL),
  • device battery life: 10-15 years.

Ultra-Reliable Low Latency Communications (URLLC): Emerging critical applications such as industrial internet, smart grids, infrastructure protection, remote surgery, and Intelligent Transportation Systems (ITS) have very strict latency and reliability requirements. For this ultra-reliable and low latency area, the relevant 5G requirements are:

  • user plane latency: less than 0.5 ms (one-way, UL and DL),
  • reliability: 99.999% probability of successful delivery within a user plane latency of 1 ms (one-way, UL and DL),
  • control plane latency: tens of ms,
  • mobility interruption time: 0 ms.
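The numeric targets listed above lend themselves to a machine-readable summary. The sketch below is purely illustrative: it collects the requirements from the lists into a table and checks a hypothetical candidate technology's self-evaluation numbers against them (the candidate values are made up for the example).

```python
# Illustrative sketch: selected IMT-2020 minimum requirements from the
# lists above, checked against a hypothetical candidate's self-evaluation.
REQUIREMENTS = {
    "peak_rate_dl_gbps":      20,         # eMBB, downlink peak data rate
    "peak_rate_ul_gbps":      10,         # eMBB, uplink peak data rate
    "user_rate_dl_mbps":      100,        # 5th percentile user throughput, DL
    "user_rate_ul_mbps":      50,         # 5th percentile user throughput, UL
    "area_capacity_mbps_m2":  10,         # indoor hotspot area capacity
    "connection_density_km2": 1_000_000,  # mMTC: 10^6 devices/km^2
    "up_latency_ms_urllc":    0.5,        # URLLC one-way user plane latency
}

# KPIs where a smaller reported value is better (upper bounds):
LOWER_IS_BETTER = {"up_latency_ms_urllc"}

def meets_requirements(candidate: dict) -> dict:
    """Return per-KPI pass/fail for a candidate's reported values."""
    result = {}
    for kpi, target in REQUIREMENTS.items():
        value = candidate[kpi]
        result[kpi] = value <= target if kpi in LOWER_IS_BETTER else value >= target
    return result

# Hypothetical self-evaluation numbers, for illustration only:
candidate = {
    "peak_rate_dl_gbps": 25, "peak_rate_ul_gbps": 12,
    "user_rate_dl_mbps": 120, "user_rate_ul_mbps": 60,
    "area_capacity_mbps_m2": 12, "connection_density_km2": 1_200_000,
    "up_latency_ms_urllc": 0.4,
}
print(meets_requirements(candidate))  # every KPI reports True for these values
```

In the real process, of course, each KPI is evaluated with its own methodology (analysis, inspection, or simulation) per test environment, not with a single scalar comparison.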


Figure 1. The main 5G use cases and applications


[1] ITU-R, IMT Vision – Framework and overall objectives of the future development of IMT for 2020 and beyond, Recommendation ITU-R M.2083-0, September 2015.

[2] 3GPP, Study on scenarios and requirements for next generation access technologies, 3GPP TR 38.913, October 2016.


Report on standardization in 3GPP – Release 14 status

Jose F. Monserrat (Universitat Politècnica de València)

The specification of 3GPP Release 14 began in September 2014 and marked the beginning of the work on New Radio (NR), which will be the 3GPP candidate for 5G as defined by ITU-R within the IMT-2020 family of standards. Release 14 is nearing its conclusion: the definition of protocols (stage 3) is expected to be completed in June 2017, with subsequent revisions to correct problems and bugs.

Without doubt, the key novelty of Release 14 is the start of work on the specification of a new RAT that is not backward compatible with LTE-A and will evolve in parallel with it. This radio technology, known as New Radio, started with a Release 14 study item on scenarios and requirements, which began in December 2015. To date, this study item has defined several important aspects of what the 3GPP proposal for 5G shall be. In parallel, since the second quarter of 2016, 3GPP has also been studying the most appropriate technological solutions to meet the stated requirements. However, NR will not be specified until Release 15, with its phase 1 realization expected in the second half of 2018.

However, Release 14 is much more than the beginning of 5G: it contains more than 30 studies covering aspects as diverse and important as V2X communications, improved location services, latency reduction in LTE, the separation of the user and control planes (so important for virtualization), improvements in the use of unlicensed spectrum, the extension of relaying schemes for communication between machines, carrier aggregation between bands, various improvements in broadcasting, and the extension of the number of antennas beyond 16. Because of their special relevance, this post expands the description of three aspects: improved latency for LTE, V2X communications, and Licensed-Assisted Access.

Improved latency for LTE

The study item on techniques for LTE latency reduction was finalized in June 2016 with technical report 3GPP TR 36.881, which closed the study from that point on.

This work focused mainly on improving semi-persistent scheduling (SPS), reducing handover latency, and shortening the TTI.

Regarding SPS, it proved interesting to enable an SPS configuration with a period of 1 TTI, which greatly reduces signalling for users with a high demand for resource availability. Regarding handover latency, the possibility of performing a handover without a new RACH procedure was studied, as well as staying connected to the source cell throughout the handover period. Although both solutions were highlighted as promising, their feasibility was not addressed. Finally, with regard to the reduction of the TTI length, the simulation results were not promising for many services, and it was concluded not to reduce the TTI length below 1 slot, i.e. 0.5 ms.

V2X communications

Enabling direct communication between vehicles within the cellular system is key to achieving the safety required for the deployment of autonomous cars. The standardization of V2X communications began in Release 13 with a study item on the requirements of ITS services. Several specific aspects make this type of communication particularly complex, including the relative lack of synchronization between terminals and the high speed of transmitter and receiver, which requires a higher pilot density to enable proper coherent detection. In Release 14, these issues are addressed within the study item “Support for V2V services based on LTE sidelink”.

Although the study item has not yet been closed, the radio aspects are considered complete and are included in 3GPP technical report TR 36.785, while the operational procedures are expected to be completed by March 2017.

The system is expected to operate with different bandwidths, including 10 MHz, using a dedicated carrier for V2X communications and GNSS satellite signals for time synchronization.

Two configurations have been defined. In configuration 1, the system is fully distributed, both for interference management and for scheduling, and a new scheduling mode, mode 4, was defined, which enables sensing-based semi-persistent scheduling. Resource allocation also depends on geographic information.

In configuration 2, mode 3 scheduling is used, in which eNBs assist in decision-making on interference management and scheduling through specific signalling over the Uu interface. In short, the eNB determines the set of resources that vehicles then share dynamically.
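The sensing-based selection behind mode 4 can be sketched in a few lines. The code below is a deliberately simplified illustration (rank resources by sensed energy, keep the quietest fraction, pick one at random, with made-up energy values); the actual 3GPP procedure additionally involves SCI decoding, thresholds, and reselection counters.

```python
import random

def mode4_select(measured_energy_dbm, keep_fraction=0.2):
    """Toy sensing-based resource selection in the spirit of sidelink mode 4.

    measured_energy_dbm: dict mapping resource_id -> average energy (dBm)
    sensed on that resource over a past window. Resources are ranked by
    sensed energy, the quietest `keep_fraction` are kept as candidates,
    and one candidate is chosen uniformly at random.
    """
    ranked = sorted(measured_energy_dbm, key=measured_energy_dbm.get)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return random.choice(ranked[:n_keep])

# Illustration with made-up sensing results for 10 sidelink resources:
sensed = {rid: -100 + 3 * rid for rid in range(10)}  # resource 0 is quietest
choice = mode4_select(sensed)
print(choice)  # one of the two quietest resources (ids 0 or 1)
```

The random choice among the quietest candidates is what decorrelates the selections of nearby vehicles; once a resource is chosen, semi-persistent scheduling reuses it for several periods before reselection.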

Licensed-Assisted Access 

In Release 13, 3GPP already included the possibility of transmitting in the downlink on secondary cells operating in unlicensed spectrum, under the control of a primary cell operating in licensed spectrum. This is what is known as Licensed-Assisted Access (LAA).

LAA improvements are included in Release 14 within the item known as “Enhanced LAA for LTE”. In late 2016, the major contributions focused on the changes needed in the core of the radio protocols to support this functionality, with modifications especially in the RRC and MAC protocols and in the physical layer capabilities of user equipment and base stations. Other aspects are still under development, and sufficient progress on LAA is not expected until mid-2017.

The main complexity of LAA lies in coexistence with other protocols in unlicensed bands, such as the IEEE 802.11 family. Therefore, LAA must include listen-before-talk (LBT) procedures and discontinuous transmission schemes to limit its occupancy of the shared channel. Furthermore, Release 14 will also include uplink LAA transmission, for which signalling must be highly compressed compared with conventional operation.
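A minimal energy-detection LBT loop looks roughly as follows. This is a toy sketch only: it abstracts away the category-4 back-off details of the actual LAA procedure, the threshold is an illustrative value, and `sense_energy` is a hypothetical caller-supplied hook.

```python
import random

ENERGY_THRESHOLD_DBM = -72.0  # illustrative clear-channel assessment threshold

def listen_before_talk(sense_energy, max_backoff_slots=15):
    """Toy LBT with random back-off: draw a back-off counter, decrement it
    only in sensing slots where the channel is idle (sensed energy below
    the threshold), freeze it in busy slots, and transmit when it hits 0.

    `sense_energy` is a function returning the sensed energy (dBm) for one
    sensing slot. Returns the number of slots spent before transmitting.
    """
    backoff = random.randint(0, max_backoff_slots)
    slots = 0
    while backoff > 0:
        slots += 1
        if sense_energy() < ENERGY_THRESHOLD_DBM:  # slot was idle
            backoff -= 1
        # busy slot: counter frozen, keep sensing

    return slots  # channel access granted; transmission may start

# Illustration with a channel that is always idle:
slots_waited = listen_before_talk(lambda: -95.0, max_backoff_slots=7)
print(slots_waited)  # between 0 and 7 idle slots before transmitting
```

The freeze-on-busy behaviour is the essential fairness mechanism: a node that defers to Wi-Fi traffic does not lose the back-off progress it has already accumulated.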

Massive MTC – Reducing the Physical Layer Overhead through Multi-Carrier Compressed Sensing based Multi-User Detection (MCSM)

Carsten Bockelmann (University of Bremen)

In the 5G book, we focused on the overarching challenge of reducing signaling overhead from the protocol level down to the physical layer design. Several ideas were discussed to resolve the problems of today's access reservation strategies and enable truly massive access. However, today's systems mostly rely on coherent detection strategies that require knowledge of the channel state. Efficient channel estimation is therefore a very important lever for reducing the physical layer overhead in the case of a massive number of users. In principle, the channel state information of every user communicating with the base station must be estimated, which incurs a significant overhead in massive MTC with very small payloads (think temperature sensors, status messages, etc.). Furthermore, if large coverage areas are targeted, high-quality channel estimation requires significant pilot resources to ensure good Signal-to-Noise Ratios through noise averaging. Therefore, an alternative approach is called for.


Figure 1 – Multi-Carrier Compressed Sensing Multi-User Detection (MCSM) concept and components.

Taking the lessons learned and summarized in current research and in the 5G book, we proposed the so-called Multi-Carrier Compressed Sensing based Multi-User Detection (MCSM) as a physical layer concept for massive MTC [1, 2, 3]. MCSM comprises three main building blocks: (i) a multi-carrier waveform; (ii) Compressed Sensing based Multi-User Detection (CS-MUD); and (iii) non-coherent communication.

The many advantages of multi-carrier waveforms are well documented and need not be repeated here. For MCSM, two properties matter: the realization of narrowband sub-channels within larger spectrum bands, and the flexible allocation of such sub-channels. Specifically, narrowband sub-channels are required to enable simple differential modulation, as explained below.

The second building block is Compressed Sensing based Multi-User Detection (CS-MUD), which serves as an activity detector in this concept and simultaneously separates the multi-user data streams that are superimposed through CDMA-like spreading [4]. Thus, CS-MUD reduces the protocol overhead as already discussed in the book but, in contrast to previous assumptions, does not estimate the user data symbols directly. Instead, it performs the multi-user detection and provides estimates of the differentially encoded user symbols.
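As a toy illustration of the activity-detection idea behind CS-MUD, the NumPy sketch below recovers which of many spreading-code users are active from a single superimposed receive vector, using orthogonal matching pursuit. It is a greatly simplified, noiseless sketch with unit symbols, not the scheme of [4].

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, spread_len, n_active = 20, 32, 3   # many users, few active (sparse)

# Random binary spreading codes, one column per user (CDMA-like spreading):
A = rng.choice([-1.0, 1.0], size=(spread_len, n_users)) / np.sqrt(spread_len)

x = np.zeros(n_users)                       # sparse activity vector
active = rng.choice(n_users, size=n_active, replace=False)
x[active] = 1.0                             # active users transmit a unit symbol
y = A @ x                                   # superimposed receive vector (noiseless)

# Orthogonal matching pursuit: greedily pick the column best matching the
# residual, then re-fit all picked users by least squares on the full vector.
support, residual = [], y.copy()
for _ in range(n_active):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

print(sorted(support), sorted(active.tolist()))  # detected vs. truly active users
```

The point of the sketch is that sparsity (few active users) is what makes joint activity detection and data separation possible from far fewer observations than users; a practical CS-MUD additionally handles noise, unknown symbol values, and frame structure.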

Finally, the third building block, differential modulation, is introduced to solve the pilot overhead problem through non-coherent detection. Non-coherent communication is a very attractive solution for several reasons. A major advantage is the avoidance of channel estimation and the pilot overhead it incurs. Instead of channel estimation and equalization, the data is mapped onto the phase differences of consecutive transmit symbols, which makes it robust against phase changes caused by the transmission channel. If the channel is non-frequency-selective and constant over the frame length, only the starting phase of the data symbols must be known, which reduces the overhead tremendously. As already indicated, the multi-carrier waveform building block is required to implement this easily in a multi-service context. Massive MTC users are served by allocating sufficiently small sub-bands, within the coherence bandwidth of the channel, to a single MCSM system. Each MCSM system then experiences only a non-frequency-selective single-tap channel, well suited for non-coherent modulation.
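The differential trick can be demonstrated in a few lines of NumPy: data rides on the phase difference between consecutive symbols, so an unknown but constant channel phase cancels out at the receiver without any channel estimation. Below is a minimal D-QPSK sketch under the assumptions in the text (no noise, flat single-tap channel, known starting phase).

```python
import numpy as np

def dqpsk_mod(symbols, start_phase=0.0):
    """Differentially encode QPSK symbol indices (0..3) onto phase steps.

    The first output symbol is the known phase reference; each data symbol
    advances the phase by index * 90 degrees relative to its predecessor.
    """
    phase_steps = np.pi / 2 * np.asarray(symbols)
    phases = start_phase + np.cumsum(phase_steps)
    return np.exp(1j * np.concatenate(([start_phase], phases)))

def dqpsk_demod(rx):
    """Non-coherent detection: recover indices from phase differences."""
    diff = np.angle(rx[1:] * np.conj(rx[:-1]))          # phase increments
    return np.round((diff % (2 * np.pi)) / (np.pi / 2)).astype(int) % 4

data = np.array([0, 1, 3, 2, 2, 0, 1])
tx = dqpsk_mod(data)

# Flat single-tap channel with an unknown constant phase rotation:
channel = 0.8 * np.exp(1j * 1.234)
rx = channel * tx

print(np.array_equal(dqpsk_demod(rx), data))  # True: the phase offset cancels
```

Because the receiver only ever looks at `rx[k] * conj(rx[k-1])`, the channel term contributes `|h|^2` to each product and its phase drops out, which is exactly why no pilots are needed while the channel stays constant.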

Of course, it is well known that non-coherent modulation suffers a performance loss equivalent to a 3 dB SNR penalty, but with advanced demodulation concepts this loss can be partly compensated [5]. Thus, fitting the theme of “simple transmitter, complex receiver”, complexity is once again shifted to the base station for massive MTC uplink communication.


Figure 2 – Narrowband MCSM systems hopping in frequency.

A downside of narrowband MCSM channels is the dependence on channel quality, as illustrated in Fig. 2. In the unlucky case that a user experiences a “bad” channel in the allocated frequency band, decoding is nearly impossible. Therefore, the MCSM concept includes frequency hopping to provide frequency diversity within one frame. Multiple MCSM systems can hop (pre-planned or randomly) across the allocated massive MTC resources, as shown in Fig. 2, and thereby achieve more stable performance. However, hopping incurs additional overhead: the starting phase of the differentially encoded user symbols must be known after each hop, which is equivalent to another “pilot”. Hence, careful system design is required to balance overhead and diversity gains appropriately.
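The hopping trade-off admits back-of-the-envelope arithmetic: every hop costs one extra phase-reference symbol to restart the differential chain. The frame length below is a hypothetical number, chosen only to make the trade-off concrete.

```python
def hopping_overhead(frame_symbols, n_hops):
    """Fraction of the frame spent on phase references: the differential
    chain needs one known starting symbol per hop segment."""
    references = n_hops  # one reference at the start of each hop segment
    return references / frame_symbols

frame = 96  # hypothetical frame length in symbols
for hops in (1, 2, 4, 8):
    print(hops, hopping_overhead(frame, hops))
# With 8 hops, 8-fold frequency diversity costs about 8.3% of the frame.
```

So doubling the number of hops doubles the diversity order but also doubles the reference overhead, which is the balance the text says must be struck by careful system design.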

Finally, it is interesting to look at the performance of the MCSM concept as a function of the allocated bandwidth. Fig. 3 shows the frame error rate after decoding of a half-rate convolutional code for different per-user data rates [2]. Each rate corresponds to the “narrowband” bandwidth that is allocated (fixed D-QPSK modulation and code rate). Thus, with increasing data rate, the coherence bandwidth of the channel (approx. 300 kHz here) is increasingly violated, leading to additional decoding errors. Clearly, such a system depends strongly on the coherence bandwidth and the chosen data rates (bandwidths) and must be carefully designed. Still, the general concept shows promising performance with low physical and medium access layer overheads. We have also presented a first practical evaluation of MCSM in indoor environments to demonstrate the practicality of the approach [6]. Naturally, depending on cell sizes, deployments, and so on, the MCSM parametrization requires careful adaptation in a larger system context like 5G.


Figure 3 – Frame error rate over SNR for different data rates. Each data rate corresponds to an MCSM system bandwidth. Increasing data rates violate the coherence bandwidth (ca. 300 kHz) of the channel.


[1] F. Monsees, M. Woltering, C. Bockelmann, and A. Dekorsy, “Compressive Sensing Multi-User Detection for Multi-Carrier Systems in Sporadic Machine Type Communication,” IEEE 81st Vehicular Technology Conference (VTC2015-Spring), Glasgow, UK, May 2015.

[2] F. Monsees, M. Woltering, C. Bockelmann, and A. Dekorsy, “A Potential Solution for MTC: Multi-Carrier Compressive Sensing Multi-User Detection,” Asilomar Conference on Signals, Systems, and Computers, Asilomar, CA, USA, November 2015.

[3] F. Monsees, M. Woltering, C. Bockelmann, and A. Dekorsy, “Multicarrier, Multi-User MTC System using Compressed Signal Sensing,” patent application, PCT WO2016177815 / DE102015208344A1.

[4] C. Bockelmann, H. Schepker, and A. Dekorsy, “Compressive Sensing based Multi-User Detection for Machine-to-Machine Communication,” Transactions on Emerging Telecommunications Technologies, Special Issue on Machine-to-Machine: An Emerging Communication Paradigm, Vol. 24, No. 4, pp. 389-400, June 2013.

[5] L. Lampe, R. Schober, V. Pauli, and C. Windpassinger, “Multiple-Symbol Differential Sphere Decoding,” IEEE Transactions on Communications, Vol. 53, No. 12, December 2005.

[6] M. Woltering, F. Monsees, C. Bockelmann, and A. Dekorsy, “Multi-Carrier Compressed Sensing Multi-User Detection System: A Practical Verification,” 19th International Conference on OFDM and Frequency Domain Techniques (ICOF 2016), Essen, Germany, August 2016.


Device-to-Device Communications for 5G Networks Utilizing Social Awareness

Gabor Fodor (Ericsson Research)

Device-to-device (D2D) communication in 4G cellular networks is an important technology enabler of national security and public safety services, as well as of vehicle-to-vehicle and vehicle-to-infrastructure communication services [1, 2]. The METIS and METIS-II projects have proposed further-developed D2D technology components to meet the requirements imposed by enhanced mobile broadband, ultra-reliable low-latency services, and massive machine-type communications. These technology components include advanced network coding schemes for cellular coverage extension, network-assisted multi-hop D2D communication solutions, multi-antenna techniques, and in-band full-duplex transmissions [3, 4], as illustrated in Figure 1.



Figure 1: D2D communications within cellular network coverage take advantage of network assistance, while D2D communications out of cellular coverage help to provide connectivity between users in each other’s proximity.

Recently, Ericsson Research, Wireless@KTH and the Automatic Control Lab of the KTH Royal Institute of Technology in Stockholm started a joint project not only to investigate how to better support D2D communications, but also to explore the opportunities D2D presents in 5G networks [5, 6]. The new project, called Beyond User-in-the-Loop: User-in-the-Service (BUSE), aims to include users – especially those with advanced smart mobile devices and D2D-capable vehicles – as integral elements of the wireless infrastructure. The key observation is that such advanced wireless devices and vehicles, equipped with multiple antennas and relaying capabilities, can help to deliver wireless services to peer users and vehicles.

While largely responsible for the increase in traffic load, D2D-capable smart mobile devices – with their increased processing power, memory, multiple wireless interfaces and radios, and often multiple antennas – may now also be enlisted to reinforce the network and handle this extra demand. Mobile network operators can incentivize such devices to take part in the wireless infrastructure and help deliver wireless services. Our current joint research with Tampere University of Technology (TUT) and the University Mediterranea of Reggio Calabria (UMRC) explores how social awareness can help to create and recognize win-win situations among infrastructure owners, service providers and end-users.

In the BUSE project and in our joint research with TUT and UMRC, we concentrate on introducing a novel layer of social awareness that empowers the communicating devices to become autonomously deciding entities. Our main objective is to explore how the two domains – human social awareness and D2D-enabled proximate connectivity – may interplay to improve communications performance in terms of system throughput and service quality. These improvements, together with the resulting gains in user device energy efficiency, may provide the much-needed incentives for eventual user adoption of the promising D2D paradigm.

Our system-level performance evaluations suggest that trusted, social-aware direct connectivity using D2D communications has the potential to decisively augment network performance and the end-user experience [7].


  1. S. Mumtaz and J. Rodriguez (Eds.), “Smart Device to Smart Device Communication”, Springer, 2014, ISBN 978-3-319-04963-2.
  2. G. Fodor and S. Sorrentino, “D2D Communications – What Part Will It Play in 5G?”, Ericsson Research blog, July 2014.
  3. G. Fodor, S. Roger, N. Rajatheva, S. B. Slimane, T. Svensson, P. Popovski, J. M. B. Da Silva, S. Ali, “An Overview of Device-to-Device Communications Technology Components in METIS”, IEEE Access, Vol. 4, pp. 3288-3299, June 2016.
  4. P. Marsch, Ö. Bulakci, I. D. Silva, “Draft Overall 5G RAN Design”, METIS-II Deliverable D2.2, June 2016.
  5. G. Fodor, V. Ayadurai, M. Ericsson, Y. Selén, M. Prytz, “D2D Communication Pushes the Boundaries of Future Telecom Systems”, Ericsson Research blog, September 2016.
  6. G. Fodor, BUSE – Beyond User-in-the-Loop: User-in-the-Service,
  7. A. Ometov, A. Orsino, L. Militano, D. Moltchanov, G. Araniti, E. Olshannikova, G. Fodor, S. Andreev, T. Olsson, A. Iera, J. Torsner, Y. Koucheryavy, T. Mikkonen, “Towards Trusted, Social-Aware D2D Connectivity: Bridging Across Technology and Sociality Realms”, IEEE Wireless Communications, Vol. 23, Issue 4, pp. 103-111, August 2016.