
WO2006121399A2 - Transmission gateway unit for pico node b - Google Patents

Transmission gateway unit for pico node b

Info

Publication number
WO2006121399A2
Authority
WO
WIPO (PCT)
Prior art keywords
tgu
node
nbu
network
communications system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SE2006/000565
Other languages
French (fr)
Other versions
WO2006121399A3 (en)
Inventor
Anders JÄRLEHOLM
Jan Söderkvist
Jeris Kessel
Per-Erik Sundvisson
Tomas Lagerqvist
Peter WAHLSTRÖM
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commscope Technologies LLC
Original Assignee
Andrew LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Andrew LLC filed Critical Andrew LLC
Publication of WO2006121399A2 publication Critical patent/WO2006121399A2/en
Publication of WO2006121399A3 publication Critical patent/WO2006121399A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 92/00 Interfaces specially adapted for wireless communication networks
    • H04W 92/04 Interfaces between hierarchically different network devices
    • H04W 92/12 Interfaces between hierarchically different network devices between access points and access point controllers

Definitions

  • the present invention relates to new transmission solutions enabling deployment of radio base stations in a UMTS network with significantly lower operational expense for transmission than current ATM-based networks.
  • the proposed solution is possible to use with existing ATM-based Radio Network Controller (RNC) and backhauls, using IP based transport for at least the last mile.
  • RNC Radio Network Controller
  • the proposed solution will also enable the use of IP transport for almost complete path, reducing cost for the more expensive transport to a minimum.
  • RAN implementations depend on ATM for communication between radio network controller RNC (and other centralized functions) and the base station, also referred to as Node B.
  • bandwidth can be reserved for each connection thus guaranteeing QoS for the end- user.
  • the synchronous physical transmission (e.g. E1 or STM-1) used for ATM networks also provides a good and traceable reference for the Node B reference frequency clock which is needed for the radio transmitter and receiver of the Node B.
  • STM-1 synchronous physical transmission
  • Node B reference frequency clock which is needed for the radio transmitter and receiver of the Node B.
  • ATM-based backhaul with its reserved bandwidth can be expensive.
  • Today the lease of transmission lines to base stations in telecommunication system networks is a major operating cost for many mobile operators.
  • the present invention targets the object of reducing transmission and installation costs for Node Bs, in particular pico Node Bs such as the OneBASE Pico Node Bs.
  • the main idea is to avoid using ATM communication with reserved bandwidth all the way from the RNC to each individual Pico Node B unit and instead to use the more inexpensive IP transport for at least the "last mile".
  • the present invention fulfils this object by means of a communications system, a transmission gateway unit and a Node B unit, as defined in the appended claims.
  • the standardization forum for WCDMA UMTS also referred to as 3GPP, has attempted to define how control plane signaling (for controlling the base station), and userplane signaling, i.e. control and user data to/from the mobiles connected to the base station, shall be transported using an IP network.
  • 3GPP also discusses the possibility to design a mediation device for translating between the (current) ATM based transport system and an IP based system. However, no details are described for this device, neither how it is to be devised nor where it should be located.
  • the defined standard mainly describes how messages are to be mapped on different protocols for the transport over the IP network, but it does not address all the problems which need to be solved when actually implementing IP transport in a real network, including e.g. problems with migration, i.e. when not all the equipment is designed for IP transport; detecting, preventing and resolving congestion problems in the IP network; security issues etc. In reality, no system with satisfactory function and performance has hitherto been presented.
  • the invention described herein enables an operator to use IP for control and data transport to/from radio base stations, without having to migrate all the centralized functions of the network, e.g. the RNC, from ATM.
  • the solution involves the introduction of a "translator" between the ATM connect of the RNC and the IP connect of the base station, similar to the interworking function described by 3GPP.
  • the translator, herein denoted Transmission Gateway Unit (TGU), will not only translate between ATM and IP transport but is also a key element in solving the inevitable problems when using IP transport over a more or less public IP network to transport control and userplane signaling to/from a remotely located radio base station.
  • the TGU and the radio base station interact to prevent, detect and resolve problems which can arise when using IP transport between the two nodes.
  • the other centralized functions e.g. the radio network controller RNC
  • the TGU will completely hide this complexity from the RNC, making the RNC believe that the Node B is still connected via ATM.
  • the invention therefore fulfils the object of providing a system which does not require modification of the RNC, even though the basis of the invention may be used also with RNCs modified for direct IP connectivity.
  • the TGU and Node B will be identical in various embodiments described herein.
  • Prevent congestion by an intelligent use of resources and bandwidth, e.g. by packing, admission control, prioritization etc.
  • - Determine priority for different kinds of traffic between the nodes.
  • IP networks for connecting base stations will also facilitate completely new deployment scenarios; Instead of having a few large base stations covering e.g. 6 cells from a single site, it will now be possible to deploy a lot of small base stations, each one only covering a single small cell, often denoted pico-cell, e.g. an office.
  • the currently available RNCs are often designed for the existing large "several sector sites", where the RNC can rely on the Node B itself for part of the administration of the different cells within the Node B. Deploying a large number of pico cells demands more activity from the RNC which may be difficult to implement.
  • the TGU can also be expanded to be a kind of "sub network RNC". This novel solution is also described herein.
  • a known problem with IP based networks is the path delay and variations in path delay, in particular if using wide area IP networks (e.g. public internet). This kind of delay variation may not be a big issue for packet oriented services (e.g. HTTP, FTP etc) but may cause problems (e.g. poor perceived quality by the end-user) for circuit switched services like speech and video. This document shows a number of different ways of minimizing also this problem.
  • Other known problems with IP based networks are that they tend to degrade quickly when overloaded (e.g. due to congestion in a router). The solutions described herein also address these problems.
  • ATM AIS Alarm Indication Signal
  • Nrt-VBR ATM service class Non real-time VBR
  • Ue User equipment as defined by 3GPP, e.g. a mobile phone
  • WiMax Worldwide Interoperability for Microwave Access
  • xDSL (all types of) Digital Subscriber Lines
  • Uplink transfer of information from Node B to RNC, or from Ue to RNC via Node B
  • Downlink transfer of information from RNC to Node B, or from RNC to Ue via Node B
  • Radio link dedicated channel (DPCH) over the air interface directed to a particular Ue.
  • Radio link set a set of radio links directed to the same Ue from different cells within the same multi-sector Node B
  • Transport bearer as defined by 3GPP, i.e. signaling connection used to transfer userplane data between RNC and Node B for one user plane transport channel (either common transport channel or dedicated) or for a group of coordinated transport channels.
  • one transport bearer is implemented as an AAL2 transport bearer which is identified by its VPI-VCI-CID combination.
  • a "transport bearer" can also be mapped on IP connections for transfer to Node B over an IP network.
  • FIG. 1 schematically illustrates interconnection of an ATM network and an IP network by means of a transmission gateway unit, in accordance with an embodiment of the invention
  • Fig. 2 schematically illustrates an alternative version of the embodiment of Fig. 1, where a base station is connected in an IP network implemented over or as part of a public internet, with a firewall connected to protect an OMC;
  • Fig. 3 schematically illustrates an alternative version of the embodiment of Fig. 1, where a geographically distributed IP network may use a mix of standard broadband internet connections or xDSL over telephone lines to reach individual remotely located Node B units;
  • Fig. 4 schematically illustrates an embodiment of the invention, in which circuit switched traffic is transported to/from Node B using ATM links, and packet switched user data is transported to/from Node B using IP networks;
  • Fig. 5 schematically illustrates NBAP over IP control plane signaling in accordance with an embodiment of the invention
  • Fig. 6 schematically illustrates simplified NBAP over IP, in accordance with an alternative control plane signaling embodiment
  • Fig. 7 schematically illustrates TGU transparent AAL2 signaling and ALCAP handling in accordance with an embodiment of the invention
  • Fig. 8 schematically illustrates an alternative solution for ALCAP handling, using ALCAP - IP - ALCAP, control plane signaling in accordance with an embodiment of the invention
  • Fig. 9 schematically illustrates NBU O&M signaling in accordance with an embodiment of the invention
  • Fig. 10 schematically illustrates external O&M signaling to TGU directly from OMC in accordance with an embodiment of the invention
  • Fig. 11 schematically illustrates control plane routing in an example with two control plane channels, in accordance with an embodiment of the invention
  • Fig. 12 schematically illustrates user plane signaling in accordance with an embodiment of the invention
  • Fig. 13 schematically illustrates user plane routing, in an example with three user plane channels in accordance with an embodiment of the invention
  • Fig. 14 schematically illustrates clock synchronization of a Node B in an IP network, in accordance with an embodiment of the invention
  • Fig. 15 schematically illustrates O&M VCC to Node B encapsulated in IP/UDP frames, in accordance with an embodiment of the invention
  • Fig. 16 schematically illustrates an O&M network in accordance with an embodiment of the invention
  • Fig. 17 schematically illustrates an alternative method for transferring
  • Alternate methods for establishing and releasing transport bearers in TGU and how to connect transport bearers to IP-UDP addresses, including the method of having the configuration more or less hardcoded. How the Node B and TGU can detect, prevent and react to congestion/overload in the IP network.
  • TGU-Node B communication in parts is public.
  • RNC connects to the TGU using IP transport and TGU merely acts as an intelligent traffic concentrator
  • TGU functionality is included inside the RNC, thus creating an RNC with IP interconnect to the Node Bs.
  • BLAN IP based Node B Local Area Network
  • NBU Node B units
  • This BLAN may either be a true local network, e.g. a network inside a building or a campus, alternatively the BLAN may be a geographically distributed network, more like WAN (wide area network) or even use the existing public IP network
  • RAN Radio Area Network
  • RNC radio network controller
  • O&M centralized O&M center
  • the BLAN concept allows the major part of the existing RAN (including RNC) to rely on ATM for Iub, and still use IP transport for at least "the last mile" towards Node B.
  • BLAN will use standard IP protocols for communication.
  • the transmission interface from BLAN towards the rest of the Radio Area Network (RAN) will be implemented in a Transmission Gateway Unit (TGU);
  • TGU Transmission Gateway Unit
  • ATM-RAN external
  • BLAN internal
  • the Node B units will be connected directly to the BLAN and will be designed to accept Iub (control and userplane) and O&M communication over a standard IP connection, e.g. an Ethernet port.
  • BLAN is preferably designed to only depend on available standard products such as IP/Ethernet switches and xDSL modems.
  • the design and choice of communication protocols over BLAN facilitate the use of standardized and readily available products without modification.
  • BLAN is preferably prepared for IP-based RAN transport, as specified by 3GPP in release 5.
  • IPv6 shall be supported and IPv4 is an option.
  • IPv4 will be used for local transport within the BLAN but preparations for IPv6 will be made in the NBU and TGU.
  • By using standard IP protocols it will also be possible to have the BLAN functionality implemented on existing IP network infrastructure, both WANs and e.g. office LANs, sharing this infrastructure with other types of IP traffic.
  • the Cub, i.e. the O&M interface between OMC and Node B
  • IPoA IP over ATM, as shown in section 6.1.1 above
  • the BLAN may be a local or geographically distributed network or any mix of these. In any case BLAN should only depend on available standard products such as IP/Ethernet switches and xDSL modems.
  • a local BLAN network could e.g. be a campus area or a large office complex requiring a number of Node B units.
  • a "long distance" (probably leased) ATM link will be needed from the RNC to the campus area.
  • IP transport e.g. over Ethernet, Gigabit Ethernet, WLAN, WiMax .
  • the TGU can act as an intelligent concentrator making it possible to save expensive bandwidth (ATM bandwidth) between RNC and TGU by reserving less than needed for a worst case simultaneous peak load on all Node B's connected to the BLAN.
  • ATM bandwidth
  • the TGU will continuously monitor actual load on the ATM backhaul connection, and before accepting set-up (or reconfiguration) of an AAL2 transport bearer the TGU checks that requested additional bandwidth is available on the backhaul connection. In this way it will be possible for the operator to overbook the backhaul interconnect and get a soft degradation (new calls rejected, but no/few calls dropped or degraded) should an overload situation occur.
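  • The admission check described above reduces to simple bookkeeping of reserved versus available backhaul bandwidth. The following Python sketch is purely illustrative and not taken from the application; the class and parameter names, and the explicit overbooking factor, are assumptions:

        # Illustrative sketch of the backhaul admission check; names and numbers are assumptions.
        class BackhaulAdmission:
            def __init__(self, backhaul_capacity_kbps, overbooking_factor=1.0):
                # Capacity the operator commits on the RNC-TGU ATM link; a factor > 1.0
                # allows more bandwidth to be reserved than is physically available.
                self.limit_kbps = backhaul_capacity_kbps * overbooking_factor
                self.reserved_kbps = 0.0

            def request_bearer(self, requested_kbps):
                """Called before accepting set-up or reconfiguration of an AAL2 transport bearer."""
                if self.reserved_kbps + requested_kbps > self.limit_kbps:
                    return False                    # reject the new bearer: soft degradation
                self.reserved_kbps += requested_kbps
                return True

            def release_bearer(self, released_kbps):
                self.reserved_kbps = max(0.0, self.reserved_kbps - released_kbps)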
  • a local BLAN preferably makes use of dedicated Ethernet lines between the TGU and the different Node B's; If the Ethernet lines in the BLAN are shared with other IP traffic then this will add delay and delay variations to Iub traffic, which for some end-user services may cause some degradation of performance/quality (e.g. speech and/or video calls).
  • a geographically distributed BLAN network may use a mix of standard broadband internet connections and/or e.g. xDSL over telephone lines to reach individual remotely located Node B units, as illustrated in Fig. 3.
  • IP transport systems may also be used, e.g. WiMax.
  • For a distributed BLAN it may be possible to choose the location of the TGU such that the cost for transmission between RNC and TGU is minimized; In such case the requirements on "ATM backhaul trunk efficiency" from TGU to RNC may be reduced significantly.
  • Connection between xDSL modem and Node B may be a standard Ethernet connection.
  • an xDSL modem may be included inside the Node B.
  • the actual IP transmission and routing network needed to implement the communication to the Node B units can be designed to only depend on functionality and the lower level protocols (IP, DiffServ etc) already implemented and widely used in standard IP networks, implying that existing equipment and also infrastructure can be reused.
  • Using standard IP equipment and protocols also implies that the same network used for communication to Node B units can be shared with other IP services, e.g. web surfing etc.
  • the BLAN can be implemented to use the public internet for communication between TGU and Node B units.
  • the main advantage is that the cost for communication to a remote Node B can be very low, and in fact it will be easy for anyone to install a Node B e.g. to an existing office LAN or a broadband connection (e.g. ADSL) at home.
  • IP over IP
  • IP traffic between TGU and Node B is run directly on the internet, without any VPN-like tunnels. This is a more efficient solution, and as will be shown later it is reasonably easy to secure the most sensitive parts of the TGU-Node B communication also without having to put VPN and IPsec on the complete traffic between the nodes.
  • the preferred solution for implementing BLAN over public internet is a mix where sensitive control information (NBAP, ALCAP and O&M) is run on encrypted IPsec tunnels, while the userplane is run on an IP-over-IP tunnel but not encrypted, to save processing capacity in the gateways
  • the Node B itself terminates the IPsec tunnels on the one side.
  • the TGU and OMC network needs to be protected from the public IP network by a security gateway, which terminates the encrypted IPsec tunnels.
  • the IPsec tunnels may also be terminated directly in the TGU and OMC.
  • IP over IP and IPsec tunnels also makes it possible to put the Node Bs on an office LAN, i.e. inside the firewall/security gateway (SGW) protecting this office LAN from the public internet.
  • SGW firewall/security gateway
  • the Node B will probably not have a public IP address, but instead get a NAT address by the firewall/SGW protecting the office LAN.
  • By encapsulating the Iub LAN traffic using IPsec and UDP encapsulation it will then be possible for BLAN functionality to traverse this kind of NAT gateway (RFC 3948)
  • IP transport for the Iub will reduce cost for transmission significantly, in particular if "best effort" links can be used for most of the path from RNC to Node B. However, the same Node B's will probably simultaneously also carry circuit switched traffic such as speech and/or video calls; Using "long distance" IP transport may in some IP networks (e.g. IP networks with heavy traffic load) be unsuitable for speech/video due to the delay and delay variations between RNC and Node B.
  • - circuit switched traffic e.g. speech, video
  • ATM links with reserved and guaranteed bandwidth
  • - packet switched user data e.g. TCP/IP, HTTP, FTP
  • Node B using IP networks.
  • each Node B needs to have (at least) two physical communication ports: - One (or more) ports for IP connection, typically an Ethernet port, for connection to BLAN.
  • TGU One (or more) ports for ATM connection, e.g. STM-1 or E1, for connection to an ATM backhaul (via ATM routers etc).
  • STM-1 or E1 for connection to an ATM backhaul (via ATM routers etc).
  • the TGU will be responsible for splitting the data stream to/from the RNC between
  • IP based BLAN (primarily used for packet switched user data)
  • Iub control plane (NBAP, ALCAP etc) and Cub can be transported to/from Node B using either BLAN or ATM backhaul. (Operator may select)
  • TGU can use to choose which communication path (i.e. IP or ATM backhaul) it should use for different transport bearers (user data channels) between RNC and Node B: In some cases all communication over a particular ATM PVC is always of the same type (e.g. if RNC always places delay sensitive traffic like speech on the same ATM PVC), and then the TGU may be (semi-permanently) configured to
  • the TGU must choose for each AAL2 transport bearer (CID, part of a PVC) whether to send it over the ATM backhaul (map it on another PVC to/from Node B) or to translate it to IP traffic and send it to Node B over BLAN.
  • CID AAL2 transport bearer
  • the TGU needs to be dynamically configured with which backhaul path (ATM or IP) to use to/from Node B for each AAL2 transport bearer (identified by its CID). If this path selection information is included in (e.g.) the ALCAP signaling from RNC to Node B (e.g. added as a proprietary addition to ALCAP messages), then the TGU could use these messages to configure its routing table. The TGU would also need to inform the Node B on which path to use for a particular transport bearer (IP or ATM/AAL2).
  • IP transport bearer
  • the TGU may use a combination of existing information in ALCAP messages to select path.
  • Another (probably better) option is that the Node B selects path based on information received from the RNC in NBAP messages (e.g. radio link setup request) and/or ALCAP messages (e.g. ALCAP Establish request);
  • NBAP messages e.g. radio link setup request
  • ALCAP messages e.g. ALCAP Establish request
  • the advantages with this method are that: - combining information from NBAP and ALCAP gives a better picture on what will be sent on a particular AAL2 transport bearer from the RNC (e.g. if it is to be used for an HSDPA channel).
  • Node B If Node B is responsible for selecting path then it needs to inform TGU which path to use for each transport bearer.
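  • As a purely illustrative sketch (the message name, fields and the transport used to reach the TGU are assumptions, e.g. the proprietary TISP protocol mentioned later), the Node B informing the TGU of the chosen path per transport bearer could look like this:

        from dataclasses import dataclass

        @dataclass
        class PathSelection:
            binding_id: int       # binding ID tying the NBAP/ALCAP procedures to the bearer
            use_ip_path: bool     # True -> IP over BLAN, False -> ATM/AAL2 backhaul

        def choose_path(is_speech_or_cs: bool) -> bool:
            # One possible policy: delay-sensitive circuit switched traffic stays on ATM,
            # everything else (e.g. HSDPA / packet switched data) goes over the IP network.
            return not is_speech_or_cs

        def notify_tgu(send, selection: PathSelection):
            # 'send' is whatever carries TGU-Node B interworking messages.
            send({"msg": "PATH_SELECT",
                  "binding_id": selection.binding_id,
                  "path": "IP" if selection.use_ip_path else "ATM"})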
  • a Node B requires a high quality frequency reference for its radio transmitter and receiver, typically 50-100 ppb depending on class of base station.
  • a Node B When a Node B is connected to an ATM network using synchronous lines (e.g. E1 or STM-1) then the Node B may derive its frequency reference from the clock of the transmission line. If a Node B is connected only via (e.g.) Ethernet backhaul (e.g. BLAN), then this backhaul cannot provide the required clock signal, and other methods will be needed to enable Node B to fulfill the frequency accuracy requirements stated by 3GPP.
  • the general recommendation in 3GPP for network synchronization is to supply a traceable synchronization reference according to ITU-T G.811.
  • ITU-T G.811 When Ethernet is introduced for the Layer 1 interface there is no continuous clock traceable to a primary reference clock. 3GPP does not specify how the frequency recovery is done in this case.
  • the proposed solution is to rely on a highly stable reference oscillator inside the Node B for frequency reference. Even for a reasonable cost it is possible to equip each Node B with an internal reference oscillator having a guaranteed short term stability of better than 25 ppb, i.e. well within the 3GPP accuracy requirement of 0.1 ppm for a local area BS.
  • the Node B In order for the Node B to be able to compensate for the aging of its internal reference oscillator, the Node B needs to synchronize its internal clock to some external reference source.
  • This reference clock source is herein referred to as a "time server”.
  • the time server can be any existing NTP server in the network.
  • the Node B acquires a time reference either from some time server in the network - or - from the TGU internal frequency reference which is derived from the E1/T1/J1 or STM-1 connecting the TGU with the RNC.
  • the quality of the synchronization over an IP network depends very much on the delay variations (jitter) over the IP transport network.
  • a fixed delay is less of a problem, but a jitter (delay variation over time) may be difficult to separate from variations on the Node B internal clock.
  • the synchronization accuracy over an IP connection is highly dependent on link delays/variation and processing time. If the time server is located on the internet the accuracy is expected to be 1 to 50 ms; Trying to remove this variation with a simple low-pass filter would require a time constant of 2-4 weeks. Even with these variations NTP could be used to continuously evaluate the quality of the oscillator and perform slow adjustments to compensate for the aging of the oscillator, as described in the following section.
  • jitter could be substantially less, in particular for a local BLAN with its own Ethernet lines. If the IP network is using IPv6 then jitter could also be decreased by prioritizing timing messages.
  • the TGU itself needs to have a clock recovery function and this may be
  • - a network reference clock, in case of SDH or PDH backbone net, or - an extremely stable oscillator (free running or tracked to an internet NTP server).
  • the method to overcome this problem with jitter is to perform the synchronization quite often and to use statistics from all synchronization attempts to improve the quality of the synchronization.
  • the Node B will send a new synchronization request to the appointed time server (using standard messages specified for e.g. NTP or IEEE 1588) once every time a period T has lapsed since the last request, where T preferably is a constant period but may also be allowed or controlled to vary. If an NTP server is used as time server then these messages cannot be sent more frequently than once every 16 seconds to each NTP server.
  • Several NTP servers may be used by the same Node B to improve its synchronization characteristics.
  • NTP messages over a jittery IP network enabling it to in a few hours detect even a very small frequency drift (1-10 ppb) between its internal clock and a reference time server.
  • Tests have shown that by applying these algorithms the NTP can successfully be used to continuously evaluate the quality of the Node B internal oscillator and perform slow adjustments to compensate for the aging of oscillator, even if variations are in the order of 50-100 ms.
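  • As an illustration only (not a method detailed in the application), the "statistics from all synchronization attempts" can be as simple as a least-squares fit of measured clock offset against local time; the slope of that fitted line is the relative frequency error of the Node B oscillator:

        def estimate_drift_ppb(samples):
            """samples: list of (local_time_s, measured_offset_s) pairs collected over
            hours of periodic NTP or IEEE 1588 exchanges. Returns the estimated frequency
            error in parts per billion; individual samples may be tens of ms off, but
            the fitted slope averages that jitter away over a long observation window."""
            n = len(samples)
            mean_t = sum(t for t, _ in samples) / n
            mean_o = sum(o for _, o in samples) / n
            num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
            den = sum((t - mean_t) ** 2 for t, _ in samples)
            slope = num / den            # seconds of offset gained per second elapsed
            return slope * 1e9           # dimensionless drift expressed in ppb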
  • Examples of usable algorithms and methods include "Time synchronization over networks using convex closures" by Jean-Marc Berthaud, IEEE Transactions on Networking, 8(2):265-277, April 2000, "Clock synchronization algorithms for network measurements" by L. Zhang, Z. Liu, and C. H. Xia in Proceedings of IEEE INFOCOM, June 2002, pp. 160-169, and "Estimation and removal of clock skew from network delay measurements" by S. B. Moon, P. Skelly, and D. Towsley in Proceedings of IEEE INFOCOM, volume 1, pages 227-234, March 1999.
  • the time server may be located inside the TGU; a solution which is particularly appealing if the TGU is located relatively close to the Node B, e.g. when using a local "BLAN".
  • the time server is located inside the TGU; a commercially available standard time server could be used instead, e.g. an NTP server.
  • This separated time server may be locked to other primary synchronization sources as described by standards with synchronization hierarchies for time servers using NTP, IEEE 1588 etc.
  • the separated time server may also be implemented by using a GPS receiver, making it more independent of the jitter over the IP network, something particularly useful if the time server can be located close to the Node B units using it for synchronization reference.
  • the time server is connected to IP network, implying that it may be located wherever suitable (e.g. close to a window if a GPS receiver is used) and can e.g.
  • the TGU will be informed about all transport bearers being established, reconfigured and released. This can be done by TGU terminating the ALCAP connection for all Node B's connected to it. The TGU will then use the contents of the received ALCAP messages to
  • ATM transport bearers AAL2 CID
  • Admission control functions in TGU can be disabled either for ATM network or IP network or for both.
  • TGU Once TGU has selected the UDP/IP port to use for the requested transport bearer then TGU sends a message to Node B to inform Node B on which UDP/IP port it should use for a particular "binding ID".
  • TGU may implement a fixed mapping between Binding ID (BID) and UDP port to a particular Node B.
  • 6.3.1.1 Admission control in TGU for ATM network side
  • the TGU can use the information received in messages in e.g. ALCAP to perform admission control for the ATM network. If the TGU has been configured (or in some other way been informed) about the maximum allowable bandwidth consumption per VP or per VC then TGU can compare the new request with e.g. either
  • the TGU may also continuously monitor the load (e.g. delays, buffer sizes, queues) on the ATM connections and use this to decide if to allow the new ATM transport bearer to be created and started.
  • load e.g. delays, buffer sizes, queues
  • TGU Admission control in TGU for IP network side: Similarly to the ATM side (above), the TGU may also perform admission control for the IP network side. If the TGU has been configured (or in some other way informed) about the maximum allowed bandwidth consumption on the IP network, then the TGU can compare a new request for additional bandwidth (the new transport bearer) with e.g. either - the sum of current consumption (measured or estimated) of bandwidth to/from the TGU on the IP network interconnect, or
  • TGU Since a transport bearer allocation request for the ATM network only states the requested bandwidth on the ATM side, the TGU will need to recalculate the bandwidth requirement to take into consideration the different overheads in ATM and IP networks.
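  • A rough, purely illustrative recalculation of this kind is shown below; the per-frame figures and the assumption of one FP frame per UDP/IPv4 packet are examples, not values from the application:

        def ip_bandwidth_bps(fp_payload_bytes, frames_per_second):
            # One FP frame per UDP/IPv4 packet (the 3GPP-recommended mapping):
            ip_udp_overhead = 20 + 8                 # IPv4 header + UDP header
            return (fp_payload_bytes + ip_udp_overhead) * 8 * frames_per_second

        def atm_bandwidth_bps(fp_payload_bytes, frames_per_second):
            # AAL2 adds a 3-byte CPS packet header; the result is carried in 53-byte
            # ATM cells with 48 bytes of payload (cell sharing between CPS packets is
            # ignored here, so this is a pessimistic estimate).
            cps_bytes = fp_payload_bytes + 3
            cells = -(-cps_bytes // 48)              # ceiling division
            return cells * 53 * 8 * frames_per_second

        # e.g. a 35-byte AMR FP frame every 20 ms (50 frames/s):
        #   atm_bandwidth_bps(35, 50) vs ip_bandwidth_bps(35, 50)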
  • the TGU can also implement an admission control for preventing overload/congestion of the IP network; In such case the TGU may deny a transport bearer setup because TGU and/or Node B suspects that the IP network is overloaded/congested at some point (could be some router in between TGU and Node B and need not necessarily be the access point of Node B or TGU). For further details refer to the section for "handling of congestion in IP network".
  • Node B terminates ALCAP and then sends a message to TGU to setup the transport bearer and request TGU to create a mapping between a certain CID (i.e. transport bearer) and UDP port.
  • the Node B can (if needed/requested by operator) implement admission control both for ATM connections (connections via TGU) and IP network.
  • admission control both for ATM connections (connections via TGU) and IP network.
  • Admission control of ATM interconnect can be implemented in Node B but will require that Node B has been configured with information about allowed capacity of VP and VC used in the TGU. In a further improvement the Node B can also receive measurements collected by TGU for the ATM interconnect and use this information for its admission control procedures.
  • Admission control of the IP interconnect (e.g. to avoid IP network overload) can be implemented in the Node B using the same procedures as described in previous chapter.
  • TGU can do any admission control (i.e. checking if requested transmission bandwidth on ATM and/or IP network is available).
  • the VPI corresponds to a VP directed to a certain Node B, i.e. the connection between VPI and address in the IP network needs to be configured into the TGU.
  • the Node B address in the IP network may either be a fixed IP address or an address assigned via DHCP; In the latter case the TGU can find the IP address using DNS.
  • mapping between VCI-CID and UDP port can be implemented as a simple mathematical formula hard coded into the software in TGU and Node B.
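  • One possible hard-coded mapping of this kind is sketched below; the base port and the packing of VCI and CID into the port number are assumptions made for the example only:

        UDP_PORT_BASE = 50000        # assumed start of a port range reserved for user plane

        def udp_port_for(vci: int, cid: int) -> int:
            # AAL2 CIDs are 8-bit values, so (vci, cid) packs without collisions as long
            # as the resulting port stays inside the reserved range.
            return UDP_PORT_BASE + vci * 256 + cid

        def bearer_for(udp_port: int):
            # Inverse mapping, used by the receiving node.
            return divmod(udp_port - UDP_PORT_BASE, 256)    # -> (vci, cid)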
  • the Node B can implement admission control using the procedures described in previous sections.
  • the prioritization implies that the transmitting node (TGU or Node B) needs to set a proper value in the "type of service" field (DSCP) in each IP header according to the priority selected for this particular IP packet, as sketched below.
  • TGU transmitting node
  • DSCP type of service field
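  • The sketch below shows one way of doing this marking with the ordinary socket API on a typical Linux host; the particular DSCP values chosen per traffic type are assumptions, not values from the application:

        import socket

        DSCP_EF   = 46    # expedited forwarding, e.g. speech user plane
        DSCP_AF31 = 26    # e.g. NBAP/ALCAP control plane
        DSCP_BE   = 0     # best effort, e.g. packet switched user data

        def open_marked_socket(dscp: int) -> socket.socket:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            # The TOS byte carries DSCP in its upper six bits, hence the shift by two.
            s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
            return s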
  • the TGU and/or Node B could also include the policing and shaping functions required by the DiffServ cloud (i.e. the IP network protected by the DiffServ), in which case the node doing this needs to be configured with the "service contract". Similar procedures can be used for other types of prioritization schemes, e.g. when implementing BLAN over MPLS networks instead of pure IP networks. 6.4.3 Priority for control information to/from Node B
  • Data flows containing traffic control information typically should be given a high priority on the IP network; The reason for this is that delayed or lost messages may cause time outs on higher layers (RRC etc), dropped calls etc.
  • Data flow containing operation and maintenance information (O&M) could typically be given a lower priority on the IP network; The reason for this is that most of these flows are not real time critical (e.g. software downloads) and that the communication either is protected by retransmission protocols as TCP and FTP or can be protected on application layer, e.g. by originating node resending a request message if no reply was received within a defined time.
  • the main method used for creating prioritization for user data flows is to assign each "AAL2 transport bearer" (e.g. a bearer assigned to a particular DCH or set of coordinated DCHs) to a VCC (where the VCC is identified by its VPI and VCI) with a given ATM service class.
  • AAL2 transport bearer e.g. a bearer assigned to a particular DCH or set of coordinated DCHs
  • VCC where the VCC is identified by its VPI and VCI
  • Different networks support different numbers and types of ATM service classes, but typically an ATM network supports e.g.
  • Rt-VBR real-time variable bit rate
  • Nrt-VBR non-real-time VBR
  • Each of these types corresponds to a priority level defined by the network; Type of priority and handling of priority differs between different network implementations.
  • the TGU knows which VCC received the data from the RNC, and can therefore use any information about ATM service class of the VCC to assign a priority for the IP network.
  • the Node B For the uplink data stream the Node B needs to know the ATM service class of the VCC which the TGU will map the particular user data on. Node B can obtain this information in a number of different ways: - if Node B terminates ALCAP then the ALCAP message itself informs Node B on which VCC the transport bearer will be assigned. In that case, if Node B also knows the ATM service class of that VCC then it can assign IP network priority according to this. - If Node B gets information about the assigned VCC in some other way, either from RNC (via e.g. NBAP) or from TGU (via some message originating from TGU), then if Node B knows the ATM service class for the VCC it can assign IP network priority according to this.
  • RNC via e.g. NBAP
  • TGU via some message originating from TGU
  • When Node B gets downlink userplane data from TGU this data is marked with a priority, and Node B can simply use the same priority for the uplink information associated with the same AAL2 transport bearer, i.e. data mapped to the same UDP port.
  • ATM service classes for VCC cannot be used for prioritization, e.g. because the ATM network or RNC implementation does not use this feature.
  • IP network priority could be selected using other means, e.g.:
  • the TGU and/or Node B could calculate IP network Priority from information received in ALCAP
  • Node B and TGU can select priority level depending on knowledge about the type of data that will be transported on that particular transport bearer, e.g.:
  • the Node B/TGU may use other information received from RNC to also differentiate priorities between different types of dedicated channels, e.g. assigning higher priority to "speech calls" and/or other types of circuit switched services; Node B can deduce the type of end-user service by looking at detailed parameters for the radio access bearer RAB when RNC configures/reconfigures the radio link, e.g. number and type of transport channels, transport formats for transport channels and the ToAWS-ToAWE (i.e. timing window for Node B reception of downlink userplane data on Iub from RNC). In particular the timing window gives Node B a very good hint about the priority and timing constraints RNC wants to assign a particular transport bearer.
  • priority levels may be assigned either hard coded in the software in TGU and Node B or defined by the operator as part of the configuration of the Node B and/or TGU.
  • This predefined priority level could be an absolute level, or some kind of offset related to other types of traffic.
  • TGU will be interfaced with ATM towards RNC and with an IP network towards one or a number of Node Bs. Between the TGU and Node Bs there will be several IP routers that handle the traffic between TGU and several Node Bs. The same IP network may also be handling other types of IP traffic, e.g. if the network is a WAN/LAN used for public internet. Typically, routers in IP networks respond to congestion (overload) by delaying and/or dropping datagrams.
  • DiffServ For IP networks standard solutions exist for handling priority between different kinds of IP traffic, e.g. DiffServ (RFC 2475 etc.); However, these are not always used. And even if used they cannot always solve the problem of overload/congestion.
  • IP packets will be delayed/dropped in a random fashion and neither Node B nor TGU/RNC will be informed about this, but only see the effects.
  • TGU and/or RNC plus Node B can get an early warning of a potential congestion situation and take action to decrease the traffic before service quality degrades too much; If the RNC/TGU and/or Node B manage to reduce traffic on the IP network in a controlled way then a disaster situation can be avoided and the IP network can recover faster from the congestion, not being overloaded by e.g. retransmissions. If all (or at least the critical) routers in the network monitor and report the load situation to some central management element, then TGU and/or Node B may be informed about this in order to take necessary action to reduce their load on the IP network.
  • both TGU and Node B need a method for
  • both methods should be used simultaneously, thus making the TGU-Node B IP Iub supervision independent of the congestion policies used by routers in the IP network, i.e. if they mainly drop or mainly delay traffic.
  • TGU sends a message to Node B and just before sending it to the IP network the TGU stamps the message with the current reading of its internal clock.
  • each sender and receiver needs to continuously count the number of IP packets sent and received. With some periodicity, e.g. once every 5 seconds, TGU should send a status message to Node B telling it how many IP packets were sent since the last status message and Node B compares that with the number of IP packets received during the same period. The difference between the counter of packets sent from TGU and received in Node B gives the Node B information about the number of IP packets dropped/lost by the IP network.
  • the same procedure should also be used for the uplink direction, i.e. the Node B counting the number of IP packets sent and TGU counting the number of IP packets received.
  • the counters exchanged could of course also be "accumulated number of IP packets sent and received” thus removing the problem with “sampling periods" not being identical in sending and receiving node.
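  • A minimal sketch of such a periodic counter/timestamp exchange is shown below (names, the 5-second period and the message layout are assumptions); each node would keep one such monitor per direction, and per priority level if DiffServ is used:

        import time

        class IubLinkMonitor:
            def __init__(self):
                self.sent = 0            # IP packets sent towards the peer
                self.received = 0        # IP packets received from the peer
                self.delay_samples = []  # relative one-way delay readings

            def on_packet_sent(self):
                self.sent += 1

            def on_packet_received(self):
                self.received += 1

            def build_status(self):
                # Sent e.g. every 5 seconds; accumulated counters avoid any need for the
                # sampling periods to line up exactly in sender and receiver.
                return {"sent_total": self.sent, "timestamp": time.time()}

            def on_status(self, status):
                lost = status["sent_total"] - self.received
                # The two clocks are not aligned, so only the *variation* of this
                # difference over time is meaningful as a delay trend.
                self.delay_samples.append(time.time() - status["timestamp"])
                return lost, self.delay_samples[-1]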
  • Another option for implementing these measurements is to not map 3GPP frame protocol frames (user data frames) directly on top of UDP/IP as defined by 3GPP, but instead also add an extra header containing a counter and a timestamp.
  • the TGU would need to add this extra header on Frame protocol frames in downlink and Node B to check them.
  • the Node B would add the extra header, TGU would check them and remove the extra header before transmitting the messages on the ATM network.
  • the Node B shall check that CFN of a particular message falls within a given capture window ToAWS - ToAWE as defined in TS25.402. Any variations in this could be used by Node B to detect if delay is increasing in the network from RNC to Node B. In the same way the RNC can detect an increasing delay.
  • this method is difficult to use in TGU because then TGU would need to know the relation between SFN and CFN for each transport bearer (something could be solved by Node B sending a message to TGU about this).
  • the measurement depends on Node B and RNC actually transmitting the data at a constant offset to SFN/CFN, i.e. any transmit timing variations caused by load inside RNC and/or Node B could be misinterpreted as delay variations on the IP network. However, this node internal delay is most probably rather small compared to the delays of the IP network.
  • CFN/SFN stamping of 3GPP Frame protocol frames could be used for implementing the measurements needed, but it is much simpler to get the information by introducing completely new and dedicated messages between TGU and Node B as described with the preferred method above.
  • the preferred implementation is that Node B and TGU by sending messages over the IP network periodically exchange information such that both TGU and Node B keeps statistics of IP network delay, delay variation and lost IP packet for both uplink and downlink.
  • both TGU and Node B have the same kind of information then both nodes can take actions immediately if a suspected overload/congestion situation is detected. If both delay and lost IP packets are measured over the IP network then each transmitting node (i.e. TGU for downlink and Node B for uplink) should:
  • the node responsible for transmitting in the degraded direction shall immediately take action to resolve the situation, as described below.
  • the supervision shall use at least two thresholds (a sketch of such a scheme follows this list):
  • - a warning level indicating that at least something should be done to prevent the situation from getting worse
  • - a critical level indicating that the load on the IP network must be decreased significantly immediately.
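  • A sketch of such two-level supervision is given below; the threshold values are placeholders chosen for the example and would in practice be set by the operator or hard coded per traffic type:

        WARNING_LOSS, CRITICAL_LOSS = 0.01, 0.05           # fraction of IP packets lost
        WARNING_JITTER_MS, CRITICAL_JITTER_MS = 10, 40     # growth of one-way delay variation

        def congestion_level(loss_ratio, jitter_ms):
            if loss_ratio >= CRITICAL_LOSS or jitter_ms >= CRITICAL_JITTER_MS:
                return "critical"   # load on the IP network must be decreased immediately
            if loss_ratio >= WARNING_LOSS or jitter_ms >= WARNING_JITTER_MS:
                return "warning"    # mild actions, e.g. start dropping low priority FP frames
            return "normal"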
  • FP frames frame protocol
  • the data between TGU and Node B will be separated on different priority levels on the IP network (e.g. using DiffServ priorities). If different priority levels are used on the IP network between TGU and Node B then supervision for congestion/overload should be done separately for each priority level, thus making it possible to detect congestion e.g. only affecting "low priority traffic" (e.g. user data for the packet switched (PS) end-user services). In such case:
  • low priority traffic e.g. user data for the packet switched (PS) end-user services
  • TGU and Node B the nodes (TGU and Node B) need to have separate counters of sent and received IP packets per priority level. - Messages between TGU and Node B for measuring delay of the IP network need to be sent on each IP priority level
  • Measuring delay and/or lost IP packets per priority level also makes it possible for TGU and/or Node B to implement a congestion/overload warning with different thresholds for different type of traffic, e.g.: - tolerating worse IP network behavior for PS userplane than for e.g. high priority circuit switched (CS) service like speech,
  • CS circuit switched
  • Node B and/or TGU reduce the amount of data the node transmits.
  • the first step in a suspected congestion/overload situation is that the transmitting node selects some FP frames which are discarded and not sent on to the IP network. This must be done by Node B for uplink data and TGU for downlink data; The main advantage with this is that if congestion/overload is only present in one direction of the IP network (e.g. from TGU to Node B) then the other direction is unharmed.
  • the transmitting node selects FP frames to drop from the transport bearers with assigned lowest priority, i.e. transport bearers which will be carried on ATM connections with lower service class. No frames should be dropped from UDP ports dedicated for Control plane information, e.g. NBAP and ALCAP.
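  • Purely as an illustration of this first step, the transmitting node could apply a filter of the following kind (the frame attributes and the drop ratios are assumptions):

        def should_drop(frame, level, is_control_plane):
            """frame is assumed to expose a 'priority' label and a running 'seq' counter."""
            if is_control_plane or level == "normal":
                return False                                 # never drop NBAP/ALCAP traffic
            if level == "warning":
                return frame.priority == "low" and frame.seq % 4 == 0   # shed ~25% of low prio
            return frame.priority != "high"                  # critical: keep only high priority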
  • the RNC is not aware of the overload/congestion supervision performed by Node B and TGU then either Node B or TGU or both need to decide and take action.
  • the preferred implementation is that the decision about dropping transport bearers and/or complete RL/RLS is taken by Node B.
  • the Node B needs to select which dedicated radio links, radio link sets (RLS), and/or HSDPA data flows dedicated to a certain mobile should be dropped. The decision on what to drop can be based on
  • Node B may select to drop e.g. any RLS or HSDPA data flow.
  • the preferred method for dropping RLS / HSDPA data flows is that Node B sends an NBAP message (e.g. NBAP message Radio link failure or Error indication) with proper cause value to the RNC. After this the RNC should as soon as possible remove the RLS including all transport bearers.
  • NBAP message e.g. NBAP message Radio link failure or Error indication
  • Node B could also send a message directly to TGU asking the TGU to stop transferring data corresponding to the transport bearers of the RLS; However, this is in most cases probably not needed.
  • the TGU could also autonomously decide to drop a number of downlink data connections, i.e. discarding all downlink data for those transport bearers.
  • admission control is a way for a node to deny a request for new or modified bandwidth, e.g. when RNC tries to set up a new transport bearer and/or modify the reserved bandwidth for an existing one. Where admission control is implemented, this should also be used to combat overload/congestion by denying new services (e.g. new calls) to be set up/increased if the IP network is already under stress and close to overload.
  • new services e.g. new calls
  • the admission control in this case can be performed by e.g.
  • the Node B can use the NBAP message radio link failure to indicate to
  • If the RNC has implemented the NBAP message ERROR INDICATION (not implemented by all RNCs) then this can instead be used from Node B to RNC to inform the RNC that a radio link set needs to be dropped due to problems on the IP network.
  • TGU may inform the RNC about a problem on the IP network by issuing AIS or RDI on one or more of the ATM VPs or VCs.
  • FP packets transferred over Iub are relatively small, typically less than 45 bytes. Instead the frequency is rather high, e.g. for each AMR speech call one FP packet is transferred every 20 ms. For a single cell carrier Node B, the maximum number of simultaneous speech calls is about 100, which gives a total rate of FP frames in excess of 4 kHz. It is not obvious that this kind of solution works well for an IP based network, where typically the IP packets are larger (max MTU in the order of 1500 bytes) and less frequent. For cases where the routing capacity (number of IP packets routed per second) of the IP network is limiting it would be much better if the end-points (in our case TGU and Node B) reduced the number of packets and instead made each packet bigger.
  • the TGU and Node B maps one FP message (containing e.g. one AMR speech frame) onto one UDP-IP packet for IP transport network.
  • This method makes the transform between ATM and IP easy, but at the cost of unnecessary high frequency of small IP packets on the IP network. However, this is the preferred method since this is what is recommended by 3GPP.
  • the TGU and Node B may also pack several FP messages into the same UDP-IP packet.
  • the TGU For packing of FP messages in downlink the TGU has a small internal buffer with a size corresponding to max MTU of the IP network.
  • TGU and/or Node B should be configured with MAX MTU of the IP network in order to assure that the transmitting node does not generate IP packets longer than MAX MTU. All FP messages incoming to TGU from the ATM network will be added to this buffer in the same sequence as they arrive from the ATM network.
  • the downlink buffer in TGU is sent as a message over the IP network to Node B as soon as either - the first message in the current buffer has been stored in the buffer more than an allowed maximum delay time, typically ≤ 5 ms, or
  • the Node B When receiving a packed message from the IP network, the Node B can then unpack the message and extract the individual FP messages. For uplink the Node B performs the same packing process, with the difference that in this case the FP messages have been produced by uplink signal processing inside the Node B. If priority is used in the IP network, e.g. using DiffServ, then TGU and Node B should implement separate buffering and packing for each priority level.
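  • The packing alternative described above is sketched below; the MTU and the 5 ms limit follow the text, while the 2-byte length prefix used to delimit FP messages inside a UDP packet is an assumption of this example:

        import struct, time

        MAX_MTU = 1500          # configured maximum IP packet size of the BLAN
        MAX_HOLD_S = 0.005      # oldest buffered FP message may wait at most ~5 ms

        class FpPacker:
            def __init__(self, send_udp):
                self.send_udp = send_udp
                self.buf = bytearray()
                self.first_ts = None

            def add(self, fp_message: bytes):
                framed = struct.pack("!H", len(fp_message)) + fp_message
                if len(self.buf) + len(framed) > MAX_MTU:
                    self.flush()                   # would exceed the MTU: send what we have
                if not self.buf:
                    self.first_ts = time.monotonic()
                self.buf += framed

            def poll(self):
                # Called periodically; enforces the maximum delay of the oldest message.
                if self.buf and time.monotonic() - self.first_ts >= MAX_HOLD_S:
                    self.flush()

            def flush(self):
                if self.buf:
                    self.send_udp(bytes(self.buf))
                    self.buf = bytearray()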
  • TGU transmission gateway unit
  • the TGU will be configured with all data needed for the termination of ATM PVCs and information on how data shall be transformed into the IP network.
  • the TGU needs to be configured with at least: - the address of the Node B in the IP network, could be a fixed IP address or a logical name which can be used for lookup in using DNS, and - detailed data for all ATM connections (VP-VC) intended for this Node B;
  • This data includes e.g. ATM service class, parameters for the VC etc
  • the configuration data for ATM parameters are sent to the Node B (as if the node had been connected via ATM).
  • When the Node B receives this data it determines the IP address of the TGU it has been assigned as interface to the ATM world, and then sends the configuration parameters for ATM to the TGU.
  • the same idea can be used for fault management and performance management, i.e. the Node B collects from the TGU data related to the Node B's interface to the ATM world; the Node B then reports this information to the central O&M system, making the TGU virtually invisible (but still managed) in the O&M network.
  • a further advantage of this method is that since the Node B holds all data, then if a TGU fails (or Node B fails to contact a particular TGU) the Node B can instead try to establish contact with another TGU (a hot standby). When the Node B has sent all configuration to that TGU then the only remaining action for a switch-over would be to change the ATM network switching such that the VPs are switched to this new TGU.
  • If the IP network used for communication between TGU and Node B is in some way accessible to the public, then some kind of protection will be needed to prevent intrusion and disturbance of the operation of the nodes.
  • the best way to achieve this would be to put all the IP communication between TGU and Node B on a VPN connection, preferably protected by IPsec or similar. However, this may be overkill and instead the protection may depend on:
  • the traffic control information between RNC and Node B (NBAP, ALCAP etc) and between TGU and Node B (if and where used) is normally not protected other than by the fact that it is completely binary and in an uncommon format. For increased protection these particular data flows may be encrypted between TGU and Node B.
  • MD5 algorithms scrambling the bits transferred.
  • the O&M information could preferably be encrypted using IPsec.
  • the TGU and Node B may implement a firewall using e.g. an IP address filter protecting the nodes from malicious traffic.
  • An optional solution would be to put a security gateway either in front of the TGU or incorporate this into the TGU. This security gateway function could then terminate VPN tunnels from the Node B, i.e. the Node B terminates the other end of the tunnel.
  • VPN tunnels, e.g. IPsec in ESP tunnel mode, could be used for the control plane (NBAP, ALCAP and O&M).
  • NBAP control plane
  • ALCAP
  • O&M control plane
  • For Userplane (corresponding to the AAL2 transport bearers) transport over the IP network the frame protocol messages could also be transported over a tunnel, either with encryption (e.g. IPsec ESP) or with null encapsulation, i.e. without additional encryption for the IP network.
  • OAM VCC configured for each Node B in the TGU. All AAL5 frames received on the OAM VCC are encapsulated in IP/UDP frames (IPoATM over UDP/IP) by the TGU and sent to the Node Bs. This is illustrated in Figs 15 and 16. In this option both the Node B OAM and UP IP addresses can be configured with DHCP.
  • the TGU may act as a DHCP server if needed but this requires DHCP relay agents (one for each hop) in the IP network between TGU and Node B.
  • DHCP relay agents one for each hop
  • For the OAM IP address the "normal" DHCP server in the OAM network may be used. The precondition is that the lower IP address (UP IP) is already configured since it will be used to carry all packets to the TGU.
  • the Node B will reply to Inverse ATM ARP requests sent on the VCC. Since IP over ATM is encapsulated over UDP/IP it may be required to decrease the MTU because of the extra header (28 bytes).
  • the Node B only has 1 IP address (UP IP) but from the OAM network it looks like 2 addresses since all IP packets to the TGU are forwarded to the Node B.
  • UP IP IP address
  • Node B both OAM and UP IP address can be configured with DHCP.
  • the TGU may act as a DHCP server if needed but this requires DHCP relay agents (one for each hop) in the IP network between TGU and Node B.
  • DHCP For the OAM IP (TGU IP) address the "normal" DHCP server in OAM network may be used. In this option the TGU will act as a DHCP client to configure the address. The Node B does not have to know this IP address.
  • TGU as a DHCP server for both user plane IP addresses and OAM IP addresses.
  • the TGU will still answer InvARP with the OAM IP address provided to each Node B.
  • the user plane transport over Iub as specified by 3GPP is mainly/originally intended for transport over ATM networks.
  • ATM networks are mostly designed to be able to provide a guaranteed quality of service in terms of loss of frames and timeliness in delivery, i.e. the jitter in transport delay is generally assumed to be rather low (in the orders of a few ms for prioritized bearers).
  • Seen from the Node B, a jitter in time of arrival of userplane data from RNC can be caused either by the RNC not sending the data with correct timing (e.g. due to varying processing and routing load inside the RNC) or by a time varying delay in the transport network between RNC and Node B.
  • the same applies for uplink data, i.e. the transmit time may vary due to internal load of the Node B and the delay over the transport network may also vary.
  • TS25.402 In order to cope with this, 3GPP specifies in TS25.402 (and in TS25.427 and TS25.435) that the Node B shall have a "time of arrival window" for capturing downlink userplane FP frames received on Iub, where each userplane FP frame is clearly marked with the CFN (connection frame number) or SFN (system frame number) for when that particular frame should result in downlink data transmitted over the air interface (Uu).
  • This "time of arrival window” is given by the RNC to Node B as ToAWS and ToAWE for each transport bearer (TS25.402), i.e. each AAL2 transport bearer (when ATM used as backhaul to the Node B) carrying data for a transport channel or group of coordinated transport channels.
  • This time of arrival window for userplane can also be used for handling of jitter of an IP based interconnect to the Node B;
  • Some RNC types do for some services try to reduce downlink delay by sending userplane data as late as possible, i.e. the RNC tries to send downlink FP frames with such timing that they arrive as close to LTOA as possible (see TS25.402 section 7.2), giving very little space for jitter.
  • the method for the RNC to know the time of arrival in Node B is to use the FP messages "UL synchronization" and "DL synchronization" specified in TS25.427 and
  • Time of arrival windows configured by RNC may be possible to adjust for a certain implementation of the transport network, but this is an implementation choice done by RNC designer/vendor.
  • ToAWS - ToAWE For some RNC implementations it may be possible to adjust the settings of ToAWS - ToAWE per Node B. In other RNCs a changed value will be used for all Node Bs connected to the same RNC, which may cause problems if an RNC mixes Node Bs connected directly by ATM and Node Bs connected over IP via a TGU. In the latter case it may be necessary for the Node B to on its own increase the ToAWS - ToAWE setting received from RNC. This procedure can be useful even if the RNC can have its ToAWS - ToAWE settings adjusted to match the behaviour of the IP network being used. The actual values used by Node B could in this way be adaptable, i.e. Node B uses statistics (or other information) about the expected jitter of the network in order to decide the size and position of the window. The Node B may even adjust the window during operation to match the current behaviour of the IP network, as sketched further below.
  • If Node B uses statistics to move/resize the time of arrival window, then this statistics calculation should be done per priority level on the IP network.
  • this kind of timing data collected by the Node B may be reported back to the RNC, thus giving the RNC the possibility to already during set up of a new channel select a suitable time of arrival window for the associated transport bearer (carried over IP or ATM).
  • the reporting mechanisms in such a case could also be implemented via an existing "performance management"/"performance counters" reporting system where Node B reports collected data to e.g. an OMC (operation and maintenance center) supervising the network.
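As an illustration of the per-priority window adaptation discussed in the preceding items, the following Python sketch shows one way a Node B could derive a widened arrival window from collected jitter statistics. The class name, the percentiles and the 2 ms margin are assumptions made for this example and are not taken from the OneBASE implementation; the exact ToAWS/ToAWE semantics of TS25.402 are also simplified to an "earliest/latest acceptable offset" pair.

```python
from collections import defaultdict, deque

class ArrivalWindowEstimator:
    """Per-priority estimate of the arrival-time spread of downlink FP frames.

    record() stores the offset (ms) between the actual and the nominal arrival
    time of each frame; spread() returns roughly the 1st/99th percentile of
    those offsets plus a safety margin, which the Node B can use to widen the
    ToAWS/ToAWE window received from the RNC.
    """

    def __init__(self, max_samples=1000, margin_ms=2.0):
        self._samples = defaultdict(lambda: deque(maxlen=max_samples))
        self.margin_ms = margin_ms

    def record(self, priority, offset_ms):
        # offset_ms < 0: frame arrived early, offset_ms > 0: frame arrived late
        self._samples[priority].append(offset_ms)

    def spread(self, priority):
        data = sorted(self._samples[priority])
        if not data:
            return (0.0, 0.0)
        lo = data[int(0.01 * (len(data) - 1))]
        hi = data[int(0.99 * (len(data) - 1))]
        return (lo - self.margin_ms, hi + self.margin_ms)

# Example: a handful of jittery arrivals on priority level 1
est = ArrivalWindowEstimator()
for off in (-1.0, 0.5, 3.2, 7.9, 2.1, -0.4):
    est.record(priority=1, offset_ms=off)
print(est.spread(priority=1))   # (-3.0, 5.2) with this small sample
```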
  • the RNC should have some kind of jitter buffers in uplink.
  • 3GPP does not state any particular requirements regarding the implementation of those, and hence the implementation will be different between different manufacturers.
  • the time of arrival windows in RNCs released today are most probably designed and optimized for ATM connect to the Node Bs.
  • the window size may be possible to adjust by e.g. configuration data. But in some implementations the windows may be hard coded in the RNC and not possible to tweak for a certain implementation of the network.
  • if uplink userplane data from all or some of the Node Bs connected to the RNC is transported partly over an IP network, then it may be necessary to modify the time of arrival windows used by the RNC.
  • uplink userplane FP frames may be lost due to incorrect time of arrival in the RNC. If the windows in the RNC cannot be adjusted to match the requirements imposed by the IP transport network, then the TGU can implement a "jitter buffer" making it possible to give a better and more stable timing of uplink data towards the RNC.
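A minimal sketch of such an uplink "jitter buffer" is shown below: frames are held for a short, configurable time and released in order, which trades a fixed extra delay for a more stable timing towards the RNC. The class name and the 20 ms default are assumptions of this sketch, not values taken from the document.

```python
import heapq
import time

class UplinkJitterBuffer:
    """Toy de-jitter buffer: hold each uplink FP frame for buffer_ms before
    it is forwarded towards the RNC, so that early frames do not overtake the
    pacing and late frames still fall inside the RNC receive window."""

    def __init__(self, buffer_ms=20.0):
        self.buffer_s = buffer_ms / 1000.0
        self._heap = []   # (release_time, sequence, frame)
        self._seq = 0

    def push(self, frame, arrival_time=None):
        now = arrival_time if arrival_time is not None else time.monotonic()
        heapq.heappush(self._heap, (now + self.buffer_s, self._seq, frame))
        self._seq += 1

    def pop_due(self, now=None):
        """Return all frames whose release time has passed, oldest first."""
        now = now if now is not None else time.monotonic()
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[2])
        return due

# Example: two frames buffered, released once 20 ms have elapsed
buf = UplinkJitterBuffer()
buf.push(b"fp-frame-1", arrival_time=0.000)
buf.push(b"fp-frame-2", arrival_time=0.005)
print(buf.pop_due(now=0.030))   # [b'fp-frame-1', b'fp-frame-2']
```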
  • ATM transport can be performed on different types of media; one example is STM-1, another example is using a single E1 line, yet another example is to use multiple E1 lines either with or without IMA between the lines.
  • any protocols used for transport of ATM shall be regarded as examples usable in different embodiments. The same applies to the IP transport: most figures indicate that IP is transported over Ethernet, but of course other types of media can be used for transport of the IP traffic, e.g. Gigabit Ethernet, Wireless LAN, WiMax etc.
  • the TGU performs forwarding of AAL5 frames to UDP frames.
  • AAL2 signaling protocol (ALCAP)
  • An RNC using R99/R4 will use ALCAP to control transport bearer allocation etc.
  • the ALCAP will be placed on a dedicated AAL5 PVC using SSCOP, as is also illustrated by the AAL2 signaling (ALCAP) in Fig. 7.
  • the TGU performs forwarding of AAL5 frames to UDP frames.
  • AAL2 signaling ALCAP
  • Control plane routing is preferably performed as described in chapter 7.5.
  • the TGU may terminate ALCAP from the RNC, as also illustrated by the ALCAP - IP - ALCAP control plane signaling of Fig. 8.
  • For the ATM transport control signaling on the network side the TGU shall support ALCAP [ITU Q2630.2].
  • the TGU shall support IP-ALCAP [ITU Q2631.1].
  • the inter-work between IP-ALCAP and ALCAP shall be done according to [ITU Q2632.1].
  • Control plane routing is preferably performed as described in chapter 7.5.
  • TGU - NBU Interwork: For dynamic establishment and release of inter-working connections for user data a new proprietary TGU inter-working signaling protocol (TISP) will be used.
  • the TGU shall act as the server for this protocol and the Node B as a client.
  • the server port for this shall be configurable.
  • the protocol can be used on either UDP or TCP.
  • Userplane signaling is described in chapter 7.6 and 7.7.
  • TGU-NBU Interwork may be based on IP-ALCAP as described in section 7.2.2.
  • TGU-NBU interwork can be omitted in some implementations, in particular if TGU implements a static mapping of VPI-VCI-CID versus IP address- UDP port. Mapping of VPI vs. IP address will then require that:
  • TGU can lookup the IP address of the Node B using DNS, or
  • the Node B at start-up registers its IP address in the TGU it has been assigned, via e.g. configuration data stored in the unit.
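The two alternatives above (DNS lookup versus start-up registration) could be combined as in the following illustrative sketch; the hostname convention, domain suffix and example addresses are hypothetical and only serve to show the fallback order.

```python
import socket

def nodeb_ip_for_vpi(vpi, registrations, dns_suffix="nodeb.example.net"):
    """Return the Node B IP address serving a given VPI (illustrative only).

    `registrations` is a dict filled in when a Node B registers itself at
    start-up; if the VPI is not registered, a DNS lookup of an assumed
    hostname convention ("nodeb-<vpi>.<suffix>") is attempted instead.
    """
    if vpi in registrations:
        return registrations[vpi]
    try:
        return socket.gethostbyname(f"nodeb-{vpi}.{dns_suffix}")
    except socket.gaierror:
        return None   # neither registered nor resolvable

# Example: a Node B registered VPI 17 at start-up
print(nodeb_ip_for_vpi(17, {17: "10.0.0.42"}))   # 10.0.0.42
```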
  • the client will use an "inter-working setup request" message to request establishment of a new inter-working connection, i.e. connecting an AAL2 transport bearer from the ATM backhaul with an IP/UDP transport bearer (UDP port) on the BLAN. Supplied in the request will be PHY, VPI, VCI, CID, transaction id, downlink UDP port and downlink IP address. Uplink parameters are set to zero. (A message-level sketch of this exchange is given after the reset messages below.)
  • the setup request should also include information on:
  • TGU server
  • if the server can accept the new connection then it will allocate an uplink UDP port and IP address and create an inter-working connection between the AAL2 SSSAR CID and UDP/IP.
  • the TGU will respond with an "inter-working setup acknowledge" message.
  • the message will include the allocated uplink parameters and the parameters supplied in the request. If for some reason an inter-working connection cannot be established the TGU shall respond with an "inter-working setup reject" message.
  • the message will include a fault code value and the parameters supplied in the request. Examples for reasons on reject:
  • the client will use the "inter-working release request" message to request release of an established inter-working connection. Supplied in the request will be PHY, VPI, VCI, CID, transaction id, uplink/downlink UDP port and uplink/downlink IP address.
  • the server (TGU) will release the connection between the AAL2 SSSAR CID and UDP/IP. If the operation is successful the TGU will respond with an "inter-working release acknowledge" message. The message will include the parameters supplied in the request.
  • otherwise the TGU shall respond with an "inter-working release reject" message.
  • the message will include a fault code value and the parameters supplied in the request.
  • the client (NBU) will use the "inter-working reset request" message to request release of all established inter-working connections.
  • the server (TGU) will release all connections between AAL2 SSSAR CIDs and UDP/IP. If the operation is successful the TGU will respond with an "inter-working reset acknowledge" message.
  • otherwise the TGU shall respond with an "inter-working reset reject" message.
  • the message will include a fault code value.
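To make the TISP exchange above more concrete, the following Python sketch models the setup, release and reset handling on the TGU (server) side. Only the fields named in the text are carried; the message layout, the port allocation scheme, the fault codes and the connection limit are assumptions of this sketch and do not reflect the actual TISP wire format.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class SetupRequest:
    phy: int
    vpi: int
    vci: int
    cid: int
    transaction_id: int
    dl_ip: str        # downlink IP address supplied by the Node B (client)
    dl_udp_port: int  # downlink UDP port supplied by the Node B

@dataclass
class SetupAcknowledge:
    transaction_id: int
    ul_ip: str        # uplink IP address allocated by the TGU (server)
    ul_udp_port: int

@dataclass
class SetupReject:
    transaction_id: int
    fault_code: int

class TispServer:
    """Toy TGU-side handler mapping (PHY, VPI, VCI, CID) to a UDP flow."""

    def __init__(self, own_ip, first_port=30000, max_connections=512):
        self.own_ip = own_ip
        self._ports = count(first_port)
        self.max_connections = max_connections
        self.table = {}   # (phy, vpi, vci, cid) -> (dl_ip, dl_port, ul_port)

    def setup(self, req: SetupRequest):
        key = (req.phy, req.vpi, req.vci, req.cid)
        if key in self.table or len(self.table) >= self.max_connections:
            return SetupReject(req.transaction_id, fault_code=1)
        ul_port = next(self._ports)
        self.table[key] = (req.dl_ip, req.dl_udp_port, ul_port)
        return SetupAcknowledge(req.transaction_id, self.own_ip, ul_port)

    def release(self, phy, vpi, vci, cid):
        return self.table.pop((phy, vpi, vci, cid), None) is not None

    def reset(self):
        self.table.clear()

# Example exchange: one bearer set up, then released
server = TispServer(own_ip="192.0.2.1")
ack = server.setup(SetupRequest(0, 1, 33, 8, 7, "10.0.0.42", 40000))
print(ack)                              # SetupAcknowledge(..., ul_udp_port=30000)
print(server.release(0, 1, 33, 8))      # True
```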
  • the Node Bs connected to the BLAN may need to monitor operation of the TGU, in particular if the RNC is not aware of the presence of the TGU in the path between the RNC and each Node B. For this reason the proprietary interwork between each Node B and the TGU will enable the Node B to:
  • NBU O&M signaling An embodiment for NBU O&M signaling is illustrated in Fig. 9.
  • the NBU O&M signaling interface is transported over IPoA, one channel per NBU.
  • the TGU shall forward between this interface and IP/Ethernet.
  • IPoA shall be implemented according to RFC1483 (LLC/SNAP encapsulation) and RFC1577 (Classical IP and ARP over ATM).
  • the MTU for IPoA should be configured to avoid fragmentation between the ATM and Ethernet interface. Control plane routing is preferably performed as described in chapter 7.5.
  • TGU O&M signaling (Tub): An embodiment for external O&M signaling is illustrated in Fig. 10.
  • The TGU remote O&M signaling interface is transported over IPoA.
  • the TGU shall terminate this interface.
  • SNMP and FTP shall be supported.
  • Control plane routing For control plane signals the routing between the network side and BLAN shall be based on VP/VC on the network side and IP address + port number on the BLAN side. Protocol conversion shall be performed in the TGU; conversion type shall be remotely configurable for each routed channel. The configuration shall be stored in persistent memory.
  • Fig. 11 illustrates an embodiment of control plane routing, in an example with two control plane channels (CH1 and CH2).
  • the TGU shall map the Frame Protocol between AAL2 SSSAR frames and UDP packets.
  • User plane routing shall be performed as described in chapter 7.7.
  • Fig. 12 illustrates an embodiment of user plane signaling, where FP equals what is specified by 3GPP in specifications TS25.427 and TS25.435.
  • 7.7 User plane routing
  • the routing between the network side and BLAN shall be based on VP/VC+CID on the network side and IP address + UDP port on the BLAN side. Routing of individual data channels shall be dynamically configured with the proprietary protocol TISP.
  • Fig. 13 illustrates an embodiment of user plane routing, in an example with three user plane channels (CH1, CH2, CH3).
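A minimal sketch of the downlink direction of this user plane routing is given below: an FP payload reassembled from an AAL2 SSSAR flow, identified by (VPI, VCI, CID), is forwarded as one UDP packet to the Node B address and port configured for that flow. In practice the table would be populated via TISP; the class, the plain dict and the use of a single UDP socket are assumptions of this sketch.

```python
import socket

class UserPlaneRouter:
    """Toy downlink forwarder from AAL2 flow identifiers to UDP endpoints."""

    def __init__(self):
        self.dl_routes = {}   # (vpi, vci, cid) -> (nodeb_ip, nodeb_udp_port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def add_route(self, vpi, vci, cid, nodeb_ip, nodeb_port):
        self.dl_routes[(vpi, vci, cid)] = (nodeb_ip, nodeb_port)

    def forward_downlink(self, vpi, vci, cid, fp_payload: bytes):
        dest = self.dl_routes.get((vpi, vci, cid))
        if dest is None:
            return False           # unknown flow: drop (or count and alarm)
        self.sock.sendto(fp_payload, dest)
        return True

# Example: configure one user plane channel towards a Node B
router = UserPlaneRouter()
router.add_route(vpi=1, vci=33, cid=8, nodeb_ip="10.0.0.42", nodeb_port=40000)
```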
  • NBU OneBASE Pico Node B Unit
  • NBU 3GPP Node B supporting traffic, base band and radio for one UMTS FDD carrier and one cell.
  • Basic functions, performance and layout of the OneBASE Node B are outlined in WO2005/094102, the content of which is incorporated herein by reference.
  • The roadmap for Andrew development also includes a Micro Node B unit with up to two carriers; this would be based on the same source system (HW, FW and SW), and solutions for the Pico Node B are directly transferable to the Micro Node B.
  • for the transmission board, see also the referenced document WO2005/094102.
  • Each NBU also has an Ethernet port (10/100 BaseT) which is currently only used for local on-site maintenance.
  • The current version of OneBASE and its application software is designed for ATM based backhauls, and transmission boards for 2xE1 (with IMA) and STM-1 are available today.
  • the existing NBU will be able to support IP transport using Ethernet on existing hardware. It would also be possible to design software allowing IP transport and ATM transport to be mixed on the same unit. The following SW/FW modifications would be needed for a first release:
  • xDSL communication could be handled using an external xDSL modem connected to the Ethernet port of the OneBASE Pico Node B unit.
  • TGU Transmission Gateway Unit
  • the OneBASE TGU will act as an IP - ATM inter-working unit. It will enable a Node B using the IP transport option according to 3GPP release 5 to communicate with an RNC only supporting the ATM transport option.
  • the TGU will also act as a converter between physical interfaces, providing conversion between ATM traffic over E1/J1/T1/STM1 to/from the RNC and IP traffic over Ethernet to/from the Node B.
  • a proprietary protocol will be used for exchanging routing information etc. between the TGU and the Node B(s).
  • the interwork between ALCAP and IP-ALCAP will be done according to ITU specification Q2632.1. Since the control of transport bearers between the RNC and several Node Bs could be performed by the TGU, it might also be possible to concentrate traffic in an "intelligent" way, thus saving transmission cost for the interface between the RNC and a remotely located TGU. This is particularly important if the transmission cost between the RNC and the TGU is significant (e.g. a remote star configuration).
  • the TGU should have a persistent memory for storage of application programs and configuration data.
  • Performance management for collecting traffic statistics, e.g. load on links on both the BLAN and the ATM side.
  • the TGU should continuously monitor the actual load on the ATM backhaul both:
  • per AAL2 transport bearer, and
  • per PVC, to compare with the configured max peak cell rate (PCR) etc.
  • the TGU should continuously monitor load also per priority level and/or prioritized item.
  • Before accepting a setup request (from a Node B or the RNC) of a new transport bearer the TGU must check that the requested uplink and downlink bandwidth is available on the ATM backhaul connection to the TGU, i.e. that "current load" + "new request" ≤ "max allowed load" (see the admission sketch below), where "current load" could be calculated as e.g.
  • the "current load” and/or “max allowed load” can be defined as either:
  • admission control for new transport bearers also applies when reconfiguring a transport bearer, e.g. when the RNC requests that the reserved bandwidth should be increased for a particular transport bearer.
  • admission control can be completely or partly disabled, e.g. when TGU is located close to the RNC with virtually unlimited bandwidth at no cost.
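The admission check referred to above can be illustrated as follows; whether "current load" is taken from the bandwidth reserved for the still-active bearers or from a measured figure is selectable, and the kbps unit and the example numbers are assumptions of this sketch.

```python
def admit_new_bearer(active_bearer_kbps, measured_load_kbps,
                     requested_kbps, max_allowed_kbps,
                     use_measured=False):
    """Illustrative admission check for the ATM backhaul (not product code).

    Admit the new transport bearer only if "current load" + "new request"
    stays within the configured maximum allowed load on the backhaul.
    """
    current = measured_load_kbps if use_measured else sum(active_bearer_kbps)
    return current + requested_kbps <= max_allowed_kbps

# Example: three active 384 kbps bearers, 2 Mbps backhaul budget, one new 384 kbps request
print(admit_new_bearer([384, 384, 384], 0.0, 384, 2000))   # True
print(admit_new_bearer([384, 384, 384, 384], 0.0, 768, 2000))  # False
```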
  • the TGU can be equipped with a time server using a very stable high quality reference oscillator inside the TGU.
  • the Node Bs in the BLAN can then use this time server in a similar way as an NTP server on the internet, but since this server is located on the BLAN the jitter in the IP network (i.e. the BLAN) will be significantly smaller than for an NTP server on the public internet.
  • Using IP transport for the Iub connection of base stations (Node Bs) reduces the cost for the transmission backhaul significantly. The trend is also that base stations become smaller and smaller, making it simpler to find suitable sites and to deploy them.
  • RNCs are not designed to cope with such a large number of separate single cell base stations; instead they are designed to handle fewer but larger base stations, where each base station has a few control ports terminating NBAP but handles a number of sectors, each with a number of RF carriers, e.g. a 6 sector x 3 carrier configuration.
  • the TGU can therefore be modified to also perform an aggregation of Iub, making a number of single cell/carrier Node Bs appear to the RNC as one larger several-sector Node B.
  • the TGU will need at least to terminate the common NBAP (C-NBAP, also called "Node B control port") and ALCAP (if used).
  • the dedicated NBAP (D-NBAP, also called Communication control port) may either be terminated on the TGU, or may be forwarded to the Node B unit handling a particular radio link.
  • the problem with such a solution would be that if a radio link (DPCH) moves from one Node B unit to another (a handover) then control of the radio link should also be moved.
  • 3GPP has foreseen this kind of problem and procedures for this move of communication control port are already defined in the standard.
  • the TGU can then use these already defined procedures to change communication control port (i.e. switching control flow to another Node B for a particular Ue)
  • the TGU needs to decide which Node B handles that particular cell and then forward a radio link setup message to that cell. Either this forwarded message can be an identical copy of the original message or it can be some kind of proprietary message.
  • the TGU would only terminate the C-NBAP and forward all other control and user plane signaling directly to the Node B units handling the connection at that particular moment in time. In such a case the routing inside the TGU will need to route different CIDs to/from the same VPI-VCI to different Node Bs, i.e. different IP addresses.
  • the TGU may also perform so called "softer handover", i.e.
  • the TGU needs to receive uplink data for the same Ue connection from several Node Bs and then combine these flows (e.g. by selection combining on FP packet level) to create one uplink flow to the RNC for each Ue.
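Selection combining on FP packet level, as mentioned in the previous item, can be sketched as below; the quality metric, the tuple layout and the fallback when no copy has a valid CRC are assumptions of this example.

```python
def select_combine(frame_copies):
    """Pick one copy of the same uplink frame (same CFN) received from the
    different Node B units of the radio link set.

    frame_copies: list of (crc_ok, quality_estimate, payload) tuples.
    The copy with a valid CRC and the best quality estimate is forwarded
    to the RNC; if no copy has a valid CRC the best of the bad ones is used.
    """
    valid = [c for c in frame_copies if c[0]]
    pool = valid if valid else frame_copies
    return max(pool, key=lambda c: c[1])[2]

# Example: two Node Bs delivered the same CFN; the second copy is better
print(select_combine([(True, 0.3, b"copy-A"), (True, 0.9, b"copy-B")]))  # b'copy-B'
```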
  • the TGU can in this way also emulate more than one "several sector Node B", i.e. terminating C-NBAP and ATM etc. for more than one "cluster of Node B units".
  • the RNC would then, via the TGU, see one "several sector Node B" per cluster.
  • the above described concept of a "TGU" acting as an Iub aggregator, presenting one or more "several sector Node Bs" instead of a cluster of Node B units, can also prove to be very useful even when the RNC itself can terminate Iub over IP. In such an implementation the TGU would have an IP interface both to the RNC and to the Node Bs.
  • both the TGU and the NBU include computer systems comprising microprocessors with associated memory space and operation software, as well as application software configured such that execution of computer code of the application software causes the computer systems to carry out the steps mentioned herein.


Abstract

The invention provides a new transmission solution enabling deployment of pico Node Bs with a significantly lower operational expense for transmission than current ATM based networks, and relates to a communications system comprising a Radio Network Controller (RNC) connected in an Asynchronous Transfer Mode (ATM) network; a Node B unit (NBU) radio base station connected in an IP network; and a Transmission Gateway Unit (TGU) connected to the IP network and to the ATM network, configured to change the transport bearer of data packets transmitted between the RNC and the NBU. The proposed solution will be possible to use with existing ATM based RNCs and backhauls, using IP based transport for (at least) the last mile, and will also enable the use of IP transport for almost the complete path, reducing the cost for the more expensive transport to a minimum.

Description

TRANSMISSION GATEWAY UNIT FOR PICO NODE B
1 Field of the invention
The present invention relates to new transmission solutions enabling deployment of radio base stations in a UMTS network with significantly lower operational expense for transmission than current ATM-based networks. The proposed solution is possible to use with existing ATM-based Radio Network Controller (RNC) and backhauls, using IP based transport for at least the last mile. Furthermore, the proposed solution will also enable the use of IP transport for almost complete path, reducing cost for the more expensive transport to a minimum.
2 Background
Currently most, if not all, RAN implementations depend on ATM for communication between radio network controller RNC (and other centralized functions) and the base station, also referred to as Node B. With these solutions bandwidth can be reserved for each connection thus guaranteeing QoS for the end- user. The synchronous physical transmission (e.g. El or STM-I) used for ATM networks also provides a good and traceable reference for Node B reference frequency clock which is needed for the radio transmitter and receiver of the Node B. However, ATM-based backhaul with its reserved bandwidth can be expensive. Today the lease of transmission lines to base stations in telecommunication system networks is a major operating cost for many mobile operators. For a large several sector Macro Node B this cost is partly hidden by other costs, such as site rent, power etc, and partly hidden by the transmission trunking effect between the different sectors, but the actual cost is still significant. For smaller base stations, such as a pico Node B with low site rent and low power consumption, the transmission cost will be even more significant. For this type of installations will cost for transmission probably dominate the OpEx, in particular if trying to permanently reserve enough ATM bandwidth between RNC and Node B to meet expected peak data rates for the bursty user data traffic caused by packet oriented services over e.g. the new high speed data channels HSDPA and HSUPA.
3 Summary of the invention The present invention as described herein targets the object of reducing transmission cost for Node B installation costs, in for particular pico Node Bs such as the OneBASE Pico Node Bs. The main idea is to avoid using ATM communication with reserved bandwidth all the way from the RNC to each individual Pico Node B unit and instead using the more inexpensive IP transport for at least the "last mile". The present invention fulfils this object by means of a communications system, a transmission gateway unit and a Node B unit, as defined in the appended claims.
The standardization forum for WCDMA UMTS, also referred to as 3GPP, has attempted to define how control plane signaling (for controlling the base station), and userplane signaling, i.e. control and user data to/from the mobiles connected to the base station, shall be transported using an IP network. 3GPP also discusses the possibility to design a mediation device for translating between the (current) ATM based transport system and an IP based system. However, no details are described for this device, neither how it is to be devised or where it should be located. The defined standard mainly describes how messages are to be mapped on different protocols for the transport over the IP network, but they do not address all the problems which need to be solved when actually implementing IP transport in a real network, including e.g. problems with migration, i.e. when not all the equipment is designed for IP transport; detection, prevention and resolving congestion problems in the IP network; security issues etc. In reality, no system with satisfactory function and performance has hitherto been presented.
The invention described herein enables an operator to use IP for control and data transport to/from radio base stations, without having to migrate all the centralized functions of the network, e.g. the RNC, from ATM. The solution involves the introduction of a "translator" between the ATM connect of the RNC and the IP connect of the base station, similar to the interworking function described by 3GPP. The translator, herein denoted Transmission Gateway Unit TGU, will not only translate between ATM and IP transport but it is also a key element in solving the inevitable problems when using IP transport over a more or less public IP network to transport control and userplane signaling to/from a remotely located radio base station.
In the novel solution described herein the TGU and the radio base station interact to prevent, detect and resolve problems which can arise when using IP transport between the two nodes. In this way the other centralized functions, e.g. the radio network controller RNC, need not to be modified for IP transport and neither for the problems related to this new type of transport mechanism. In fact, the TGU will completely hide this complexity from the RNC, making the RNC believe that the Node B is still connected via ATM. The invention therefore fulfils the object of providing a system which does not require modification of the RNC, even though the basis of the invention may be used also with RNCs modified for direct IP connectivity.
In various embodiments described herein, the TGU and Node B will
- Detect congestion in the IP network.
- Prevent congestion by an intelligent use of resource and bandwidth, e.g. by packing, admission control, prioritization etc. - Determine priority for different kind of traffic between the nodes.
- Resolve congestion situations in the IP network, by an intelligent discard of data and/or connections to mobiles.
- Solve the security issues implied by using a public IP network; This without having to rely on a costly completely encrypted VPN connection between the nodes. - Handle jitter caused by time varying delay in the IP network between the nodes.
Using IP networks for connecting base stations will also facilitate completely new deployment scenarios; Instead of having a few large base stations covering e.g. 6 cells from a single site, it will now be possible to deploy a lot of small base stations, each one only covering a single small cell, often denoted pico- cell, e.g. an office. The currently available RNCs are often designed for the existing large "several sector sites", where the RNC can rely on the Node B itself for part of the administration of the different cells within the Node B. Deploying a large number of pico cells demand more activity from the RNC which may be difficult to implement. For this scenario the TGU can also be expanded to be a kind of "sub network RNC". This novel solution is also described herein. It should be noted that at least parts of the novel functions described herein for preventing, detecting and combating problems related to IP network transport to/from radio base stations also could be implemented as part of an RNC designed for direct IP connections, i.e. when the separate unit for translation between ATM and IP is no longer needed. The solutions described herein will hide the IP transport system completely for the RNC, making it possible to connect an existing RNC (e.g. designed for ATM transport as described by 3GPP release 99, release 4 and/or the ATM option of release 5) with new Node B units designed for IP interconnect.
A known problem with IP based networks is the path delay and variations in path delay; In particular if using wide area IP networks (e.g. public internet). This kind of delay variation may not be big issue for packet oriented services (e.g. HTTP, FTP etc) but may cause problem (e.g. poor perceived quality by the end- user) for circuit switched services like speech and video. This document shows a number of different ways of minimizing also this problem. Other known problems with IP based networks are that they tend to degrade quickly when overloaded (e.g. due to congestion in a router). The solutions described herein also solves these problems.
4 Definitions In this description below the following acronyms are used:
AAL2 ATM Adaptation Layer protocol Type 2
ADSL Asymmetric Digital Subscriber Line
AIS Alarm Indication Signal (ATM)
ALCAP Access Link Control Application Protocol
AMR Adaptive Multi-Rate speech codec
ATM Asynchronous Transfer Mode
BLAN Node B Local Area Network
BTS Base station = Node B
CBR ATM service class Constant Bit Rate
CFN Connection Frame Number
CID Channel Identifier (ATM)
CS Circuit Switched
DCH Dedicated traffic CHannel
DHCP Dynamic Host Configuration Protocol
DNS Domain Name System
DPCH Dedicated Physical Channel
DSCP DiffServ Code Point
FACH Forward Access Channel
FTP File Transfer Protocol
GPS Global Positioning System
HSDPA High Speed Downlink Packet Access
HSUPA High-Speed Uplink Packet Access
HTTP HyperText Transfer Protocol
IEEE Institute of Electrical and Electronics Engineers
IP Internet Protocol
MPLS Multi Protocol Label Switching
MTU Maximum Transmission Unit
NBAP Node B Application Part
NBU Node B unit, same as Node B
Node B base station for 3GPP cell phone systems
Nrt-VBR ATM service class Non real-time VBR
NTP Network Time Protocol
OMC Operation Maintenance Centre
OpEx Operational Expense
PCH Paging Channel
PLL Phase Locked Loop
PS Packet Switched
PVC Permanent Virtual Circuit
RACH Random Access Channel
RAN Radio Access Network
RDI Remote Defect Indication (ATM)
RLC/MAC Radio Link Control and Medium Access Control
RLS Radio Link Set
RNC Radio Network Controller
RRC Radio Resource Control
Rt-VBR ATM service class Real-time Variable Bit Rate
SCTP Stream Control Transmission Protocol
SSCOP Service Specific Connection Orientated Protocol
SFN System Frame Number
TGU Transmission Gateway Unit
ToAWE Time of Arrival Window Endpoint
ToAWS Time of Arrival Window Startpoint
UBR ATM service class Unspecified Bit Rate
UDP User Datagram Protocol
Ue User equipment as defined by 3GPP, e.g. a mobile phone
UMTS Universal Mobile Telecommunications System
UTC Coordinated Universal Time
VC Virtual Circuit
VCC Virtual Channel Connection
VCI Virtual Circuit Identifier
VP Virtual Path (ATM)
VPI Virtual Path Identifier (ATM)
VPN Virtual Private Network
WiMax Worldwide Interoperability for Microwave Access
xDSL (all types of) Digital Subscriber Lines
3GPP 3rd Generation Partnership Project
Furthermore, the following terminology is used in this description:
Uplink: transfer of information from Node B to RNC, or from Ue to RNC via Node B
Downlink: transfer of information from RNC to Node B, or from RNC to Ue via Node B
Radio link = dedicated channel (DPCH) over the air interface directed to a particular Ue.
Radio link set = a set of radio links directed to the same Ue from different cells within the same multi-sector Node B
Transport bearer = as defined by 3GPP, i.e. a signaling connection used to transfer userplane data between RNC and Node B for one user plane transport channel (either common transport channel or dedicated) or for a group of coordinated transport channels. In the ATM network one transport bearer is implemented as an AAL2 transport bearer which is identified by its VPI-VCI-CID combination. In the system described herein a "transport bearer" can also be mapped on IP connections for transfer to Node B over an IP network.
5 Brief description of the drawings
Different embodiments of the invention are described below with reference to the appended drawings, on which:
Fig. 1 schematically illustrates interconnection of an ATM network and an IP network by means of a transmission gateway unit, in accordance with an embodiment of the invention;
Fig. 2 schematically illustrates an alternative version of the embodiment of Fig. 1, where a base station is connected in an IP network implemented over or as part of a public internet, with a firewall connected to protect an OMC;
Fig. 3 schematically illustrates an alternative version of the embodiment of Fig. 1, where a geographically distributed IP network may use a mix of standard broadband internet connections or xDSL over telephone lines to reach individual remotely located Node B units;
Fig. 4 schematically illustrates an embodiment of the invention, in which circuit switched traffic is transported to/from Node B using ATM links, and packet switched user data is transported to/from Node B using IP networks;
Fig. 5 schematically illustrates NBAP over IP control plane signaling in accordance with an embodiment of the invention;
Fig. 6 schematically illustrates simplified NBAP over IP, in accordance with an alternative control plane signaling embodiment;
Fig. 7 schematically illustrates TGU transparent AAL2 signaling and ALCAP handling in accordance with an embodiment of the invention;
Fig. 8 schematically illustrates an alternative solution for ALCAP handling, using ALCAP - IP - ALCAP control plane signaling in accordance with an embodiment of the invention;
Fig. 9 schematically illustrates NBU O&M signaling in accordance with an embodiment of the invention;
Fig. 10 schematically illustrates external O&M signaling to TGU directly from OMC in accordance with an embodiment of the invention;
Fig. 11 schematically illustrates control plane routing in an example with two control plane channels, in accordance with an embodiment of the invention;
Fig. 12 schematically illustrates user plane signaling in accordance with an embodiment of the invention;
Fig. 13 schematically illustrates user plane routing, in an example with three user plane channels, in accordance with an embodiment of the invention;
Fig. 14 schematically illustrates clock synchronization of a Node B in an IP network, in accordance with an embodiment of the invention;
Fig. 15 schematically illustrates O&M VCC to Node B encapsulated in IP/UDP frames, in accordance with an embodiment of the invention;
Fig. 16 schematically illustrates an O&M network in accordance with an embodiment of the invention; and
Fig. 17 schematically illustrates an alternative method for transferring O&M info to/from Node B, in accordance with an embodiment of the invention.
6 Detailed description of preferred embodiments
In order to reduce investments and operating costs for deploying Pico Node B units (also denoted NBU in this document) such as the OneBASE Pico Node B provided by Andrew, we suggest using IP transport for (at least) the last mile. In addition to a description of the general concept, detailed description is provided here regarding: methods for operation and maintenance (O&M) of the transmission gateway unit (TGU), in particular regarding configuration of transmission related parameters.
Creating priority tags for the IP network
Alternate methods for establishing and releasing transport bearers in TGU and how to connect transport bearers to IP-UDP addresses, including the method of having the configuration more or less hardcoded. How the Node B and TGU can detect, prevent and react on congestion/overload in the IP network.
How to reduce the routing load in the IP network by packing several user data frames in the same IP packet. Security aspects, which are particularly important if the IP network used for
TGU-Node B communication in parts is public.
How to assign and find addresses for Node B and TGU using DHCP and DNS.
How to reduce the number of Node Bs the RNC needs to handle by "Iub aggregation", creating a virtual single Node B from several interconnected Node B units.
The implementation described in this document focuses on an implementation using a newly developed "transmission gateway unit" acting as an interface between an ATM-based RNC and one or more Node B units with an IP interface as network interconnect. However, many of the described concepts also may be applied to an implementation where either
RNC connects to the TGU using IP transport and the TGU merely acts as an intelligent traffic concentrator; or
TGU functionality is included inside the RNC, thus creating an RNC with IP interconnect to the Node Bs.
6.1 Node B local area network (BLAN)
6.1.1 General about BLAN
Since most or all existing RNC implementations depend on ATM transport, we propose to introduce an IP based Node B Local Area Network (BLAN) for at least the "last mile" of the communication towards the Node B units (NBU) as depicted in Fig. 1. This BLAN may either be a true local network, e.g. a network inside a building or a campus; alternatively the BLAN may be a geographically distributed network, more like a WAN (wide area network), or even use the existing public IP network. In general each Node B requires two logical interfaces towards the Radio Access Network (RAN):
- Iub for traffic control and userplane communication to/from the radio network controller (RNC)
- Cub for the O&M interface to/from a centralized O&M center (OMC).
The BLAN concept allows the major part of the existing RAN (including the RNC) to rely on ATM for Iub, and still use IP transport for at least "the last mile" towards the Node B.
Internally BLAN will use standard IP protocols for communication. The transmission interface from BLAN towards the rest of the Radio Access Network (RAN) will be implemented in a Transmission Gateway Unit (TGU); the TGU will translate between external (ATM-RAN) and internal (BLAN) protocols and it will also distribute traffic between the different NBUs in the BLAN.
The Node B units will be connected directly to the BLAN and will be designed to accept Iub (control and userplane) and O&M communication over a standard IP connection, e.g. an Ethernet port.
BLAN is preferably designed to only depend on available standard products such as IP/Ethernet switches and xDSL modems. The design and choice of communication protocols over BLAN facilitate the use of standardized and readily available products without modification. BLAN is preferably prepared for IP-based RAN transport, as specified by 3GPP in release 5. According to 3GPP IPv6 shall be supported and IPv4 is an option. In the first phase IPv4 will be used for local transport within the BLAN but preparations for IPv6 will be made in the NBU and TGU. By using standard IP protocols it will also be possible to have the BLAN functionality implemented on existing IP network infrastructure, both WAN and e.g. office LANs, and sharing this infrastructure with other type of IP traffic.
The re-mapping of ATM transport to IP transport in the BLAN will be completely invisible for the RNC; the RNC will still only see a number of ATM PVCs for NBAP, ALCAP and user data.
6.1.2 Communication from OMC to Node B
Inside the BLAN the Cub (i.e. the O&M interface between OMC and Node B) will be distributed over IP. From OMC to the BLAN, the Cub can either be transported - via IPoA (IP over ATM, as shown in section 6.1.1 above), or
- via IP directly inserted into the BLAN (see below). If this transport is done via public Internet then a firewall or other type of security gateway should be used to protect the BLAN.
O&M communication will be described in more detail later in this document. If the BLAN itself is implemented over a (or as part of) a public internet then a firewall is at least needed to protect the OMC from malicious attacks. An embodiment of such a system design is illustrated in Fig. 2.
6.1.3 Topologies
The BLAN may be a local or geographically distributed network or any mix of these. In any case BLAN should only depend on available standard products such as IP/Ethernet switches and xDSL modems.
6.1.3.1 A local BLAN network
A local BLAN network could e.g. be a campus area or a large office complex requiring a number of Node B units. In this case a "long distance" (probably leased) ATM link will be needed from the RNC to the campus area. Inside the campus/office the TGU will then use IP transport (e.g. over Ethernet, Gigabit Ethernet, WLAN, WiMax ...) to distribute data (Cub and Iub control and userplane) to the different Node B units connected to the BLAN, as in Fig. 1. In this case it is important that the TGU can act as an intelligent concentrator making it possible to save expensive bandwidth (ATM bandwidth) between RNC and TGU by reserving less than needed for a worst case simultaneous peak load on all Node B's connected to the BLAN.
In order to do this the TGU will continuously monitor actual load on the ATM backhaul connection, and before accepting set-up (or reconfiguration) of an AAL2 transport bearer the TGU checks that requested additional bandwidth is available on the backhaul connection. In this way it will be possible for the operator to overbook the backhaul interconnect and get a soft degradation (new calls rejected, but no/few calls dropped or degraded) should an overload situation occur.
A local BLAN preferably makes use of dedicated Ethernet lines between the TGU and the different Node B's; If the Ethernet lines in the BLAN are shared with other IP traffic then this will add delay and delay variations to Iub traffic, which for some end-user services may cause some degradation of performance/quality (e.g. speech and/or video calls).
6.1.3.2 A geographically distributed BLAN network
A geographically distributed BLAN embodiment may use a mix of standard broadband internet connections and/or e.g. xDSL over telephone lines to reach individual remotely located Node B units, as illustrated in Fig. 3. Of course, other types of IP transport systems may also be used, e.g. WiMax. For a distributed BLAN it may be possible to choose the location of the TGU such that the cost for transmission between RNC and TGU is minimized; in such a case the requirements on "ATM backhaul trunk efficiency" from TGU to RNC may be reduced significantly.
On the other hand, using wide area IP networks for "long" distance communication may require great care in order to minimize delay and delay variations between TGU and the different Node B.
Connection between xDSL modem and Node B may be a standard Ethernet connection. Alternatively, an xDSL modem may be included inside the Node B.
6.1.3.3 Implementing BLAN over public internet
As described earlier the actual IP transmission and routing network needed to implement the communication to the Node B units can be designed to only depend on functionality and the lower level protocols (IP, DiffServ etc) already implemented and widely used in standard IP networks, implying that existing equipment and also infrastructure can be reused. This use of IP standard equipment and protocols also implies that the same network used for communication to Node B units also can be shared with other IP services, e.g. web surfing etc. In fact, the BLAN can be implemented to use the public internet for communication between TGU and Node B units. The main advantage is that the cost for communication to remote Node B can be very low, and in fact it will be easy for anyone to install a Node B e.g. to an existing office LAN or a broad band connection (e.g. ADSL) at home.
Using public internet for BLAN implies that special precautions are needed to protect the connected TGU and Node B units from any malicious attacks (hacking etc) and also special measures are needed to secure behavior and quality of service, e.g. delay, dropped packets etc. Solutions for these particular issues will be described in more detail below. When implementing the BLAN functionality (i.e. communication to/from Node B) over public internet then there are a couple of different options:
One solution is to implement the BLAN using VPN-like technology, i.e. the IP traffic between TGU and each Node B is tunneled over a VPN tunnel using "IP over IP". This VPN tunnel may be unsecured (pure tunneling) or secured using encrypted IPsec or similar. Such a solution will be very safe, but unfortunately the solution costs both bandwidth (due to the overhead) and hardware needed to implement the encryption.
As an alternative, the IP traffic between TGU and Node B is run directly on the internet, without any VPN-like tunnels. This is a more efficient solution, and as will be shown later it is reasonably easy to secure the most sensitive parts of the TGU-Node B communication also without having to put VPN and IPsec on the complete traffic between the nodes.
The preferred solution for implementing BLAN over public internet is a mix where sensitive control information (NBAP, ALCAP and O&M) is run over encrypted IPsec tunnels, while the userplane is run over an IP-over-IP tunnel but not encrypted, to save processing capacity in the gateways.
In the preferred solution the Node B itself terminates the IPsec tunnels on the one side. On the other side the TGU and OMC network needs to be protected from the public IP network by a security gateway, which terminates the encrypted IPsec tunnels. The IPsec tunnels may also be terminated directly in the TGU and OMC. Using IP over IP and IPsec tunnels also makes it possible to put the Node Bs on an office LAN, i.e. inside the firewall/security gateway (SGW) protecting this office LAN from the public internet. In this case the Node B will probably not have a public IP address, but instead get a NAT address from the firewall/SGW protecting the office LAN. By encapsulating the Iub BLAN traffic using IPsec and UDP encapsulation it will then be possible for the BLAN functionality to traverse this kind of NAT gateway (RFC 3948).
6.1.4 Mixing ATM and IP for "last mile"
6.1.4.1 General
For certain customers with high requirements on guaranteed quality of service for the end-user another option is to mix ATM and IP for the "last mile", as will be described below.
The high speed data services introduced with HSDPA/ HSUPA over the UMTS air interface will require a lot of bandwidth between RNC and the Node B's; Using ATM transport for this could mean very high OpEx, in particular if the operator need/tries to reserve (and lease) bandwidth for worst-case peak load.
Using IP transport for the Iub will reduce the cost for transmission significantly, in particular if "best effort" links can be used for most of the path from RNC to Node B. However, the same Node Bs will probably simultaneously also carry circuit switched traffic such as speech and/or video calls; using "long distance" IP transport may in some IP networks (e.g. IP networks with heavy traffic load) be unsuitable for speech/video due to the delay and delay variations between RNC and Node B. In one embodiment, we therefore propose a solution where
- circuit switched traffic (e.g. speech, video) is transported to/from Node B using ATM links (with reserved and guaranteed bandwidth), and
- packet switched user data (e.g. TCP/IP, HTTP, FTP) is transported to/from Node B using IP networks.
This is also illustrated in Fig. 4. For this configuration each Node B needs to have (at least) two physical communication ports:
- One (or more) port for IP connection, typically an Ethernet port, for connection to BLAN.
- One (or more) port for ATM connection, e.g. STM-1 or E1, for connection to an ATM backhaul (via ATM routers etc). In this scenario the TGU will be responsible for splitting the data stream to/from the RNC between
- the IP based BLAN (primarily used for packet switched user data)
- ATM backhaul (for speech, video etc).
Iub control plane (NBAP, ALCAP etc) and Cub can be transported to/from Node B using either BLAN or ATM backhaul. (Operator may select)
6.1.4.2 Methods for TGU to choose backhaul path to Node B
There are basically two methods which the TGU can use to choose which communication path (i.e. IP or ATM backhaul) it should use for different transport bearers (user data channels) between RNC and Node B: In some cases all communication over a particular ATM PVC is always of the same type (e.g. if RNC always places delay sensitive traffic like speech on the same ATM PVC), and then the TGU may be (semi-permanently) configured to
- route some PVCs to the Node B directly over the ATM backhaul
- translate some other PVCs to IP traffic communicated to/from Node B over the BLAN.
If the RNC does mix different types of user data traffic on the same ATM PVC (or PVCs), then the TGU must choose for each AAL2 transport bearer (CID, part of a PVC) whether to send it over the ATM backhaul (map it on another PVC to/from the Node B) or to translate it to IP traffic and send it to the Node B over the BLAN.
6.1.4.3 Dynamic configuration per CID of backhaul path
If the RNC does mix different types of user data traffic on the same ATM PVC, then the TGU needs to be dynamically configured as to which backhaul path (ATM or IP) to use to/from the Node B for each AAL2 transport bearer (identified by its CID). If this path selection information is included in (e.g.) the ALCAP signaling from RNC to Node B (e.g. added as a proprietary addition to ALCAP messages), then the TGU could use these messages to configure its routing table. The TGU would also need to inform the Node B on which path to use for a particular transport bearer (IP or ATM/AAL2).
As an alternative, to avoid modification of the RNC the TGU may use a combination of existing information in ALCAP messages to select path. Another (probably better) option is that the Node B selects path based on information received from the RNC in NBAP messages (e.g. radio link setup request) and/or ALCAP messages (e.g. ALCAP Establish request); the advantages with this method are that:
- combining information from NBAP and ALCAP gives a better picture of what will be sent on a particular AAL2 transport bearer from the RNC (e.g. if it is to be used for an HSDPA channel).
- For best performance it may be a good idea that all transport bearers used for a particular radio link (i.e. one particular cellphone call) should be transported on the same path (either all bearers on ATM or all on IP) to avoid synchronization problems in downlink.
If the Node B is responsible for selecting the path, then it needs to inform the TGU which path to use for each transport bearer, as sketched below.
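A small sketch of such a path selection, keeping all bearers of one radio link on the same path and steering circuit switched traffic to ATM and packet data to IP, is shown below; the traffic-class labels and the first-bearer-decides rule are assumptions of this example.

```python
def choose_backhaul_path(traffic_class, radio_link_paths, radio_link_id):
    """Illustrative per-bearer choice between the ATM and the IP backhaul.

    Delay-sensitive (circuit switched) traffic is placed on the ATM path,
    packet data (e.g. HSDPA) on the IP path, and all transport bearers of
    one radio link are kept on the same path, as recommended in the text.
    """
    if radio_link_id in radio_link_paths:       # keep the radio link together
        return radio_link_paths[radio_link_id]
    path = "ATM" if traffic_class in ("speech", "cs-video") else "IP"
    radio_link_paths[radio_link_id] = path
    return path

paths = {}
print(choose_backhaul_path("speech", paths, radio_link_id=1))  # ATM
print(choose_backhaul_path("hsdpa", paths, radio_link_id=1))   # ATM (same radio link)
print(choose_backhaul_path("hsdpa", paths, radio_link_id=2))   # IP
```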
6.2 Frequency reference for Node B
6.2.1 General
A Node B requires a high quality frequency reference for its radio transmitter and receiver, typically 50-100 ppb depending on the class of base station. When a Node B is connected to an ATM network using synchronous lines (e.g. E1 or STM-1) then the Node B may derive its frequency reference from the clock of the transmission line. If a Node B is connected only via (e.g.) Ethernet backhaul (e.g. BLAN), then this backhaul cannot provide the required clock signal, and other methods will be needed to enable the Node B to fulfill the frequency accuracy requirements stated by 3GPP.
6.2.2 Requirements
Requirements on frequency stability and accuracy for a 3GPP Node B are given in TS25.104 and 25.141. Methods recommended by 3GPP for synchronization are specified in TS25.411.
6.2.3 Implementation
6.2.3.1 High quality oscillator in Node B
The general recommendation in 3GPP for network synchronization is to supply a traceable synchronization reference according to ITU-T G.811. When Ethernet is introduced as the Layer 1 interface there is no continuous clock traceable to a primary reference clock. 3GPP does not specify how the frequency recovery is done in this case.
The proposed solution is to rely on a highly stable reference oscillator inside the Node B for frequency reference. Even at a reasonable cost it is possible to equip each Node B with an internal reference oscillator having a guaranteed short term stability of better than 25 ppb, i.e. well within the 3GPP accuracy requirement of 0.1 ppm for a local area BS.
The main problem with this kind of oscillator is the process of aging; typically an oscillator like this changes 100 ppb each year due to aging. If the aging can be compensated for, then it would be possible to run a Node B without continuous synchronization, e.g. connecting the Node B over Ethernet instead of synchronous transmission lines with a traceable clock.
6.2.3.2 Compensating for aging of the oscillator
6.2.3.2.1 General
In order for the Node B to be able to compensate for the aging of its internal reference oscillator, the Node B needs to synchronize its internal clock to some external reference source. This reference clock source is herein referred to as a "time server".
If the Node B uses the NTP protocol for synchronization with a time server, then the time server can be any existing NTP server in the network. The Node B acquires a time reference either from some time server in the network, or from the TGU internal frequency reference which is derived from the E1/T1/J1 or STM-1 connecting the TGU with the RNC.
There are also other standardized protocols for synchronization of clocks over an IP network, e.g. the IEEE 1588 protocol.
In any case the quality of the synchronization over an IP network is very much dependent on the delay variations (jitter) over the IP transport network. A fixed delay is less of a problem, but jitter (delay variation over time) may be difficult to separate from variations of the Node B internal clock. However, the synchronization accuracy over an IP connection is highly dependent on link delays/variations and processing time. If the time server is located on the internet the accuracy is expected to be 1 to 50 ms; trying to remove this variation with a simple low-pass filter would require a time constant of 2-4 weeks. Even with these variations NTP could be used to continuously evaluate the quality of the oscillator and perform slow adjustments to compensate for the aging of the oscillator, as described in the following section.
If instead the NTP server is located in the TGU then jitter could be substantially less, in particular for local BLAN with its own Ethernet lines. If the IP network is using IP v6 then jitter could also be decreased by prioritizing timing messages.
If the NTP server is in the TGU then the TGU itself needs to have a clock recovery function, and this may be
- a GPS clock, or
- a network reference clock, in case of SDH or PDH backbone net, or - an extremely stable oscillator (free running or tracked to an internet NTP server).
6.2.3.2.2 Acquiring good synchronization in spite of jitter on IP network
The method to overcome this problem with jitter is to perform the synchronization quite often and to use statistics from all synchronization attempts to improve the quality of the synchronization. In this solution the Node B will send a new synchronization request to the appointed time server (using standard messages specified for e.g. NTP or IEEE 1588) once every time a period T has lapsed since the last request, where T preferably is a constant period but may also be allowed or controlled to vary. If an NTP server is used as time server then these messages cannot be sent more frequently than once every 16 seconds to each NTP server. Several NTP servers may be used by the same Node B to improve its synchronization characteristics.
There are different methods described in the literature that can be applied in the Node B to enable it to use e.g. NTP messages over a jittery IP network, enabling it to detect within a few hours even a very small frequency drift (1-10 ppb) between its internal clock and a reference time server. Tests have shown that by applying these algorithms NTP can successfully be used to continuously evaluate the quality of the Node B internal oscillator and perform slow adjustments to compensate for the aging of the oscillator, even if variations are in the order of 50-100 ms. Examples of usable algorithms and methods include "Time synchronization over networks using convex closures" by Jean-Marc Berthaud, IEEE Transactions on Networking, 8(2):265-277, April 2000, "Clock synchronization algorithms for network measurements" by L. Zhang, Z. Liu, and C. H. Xia in Proceedings of IEEE INFOCOM, June 2002, pp. 160-169, and "Estimation and removal of clock skew from network delay measurements" by S. B. Moon, P. Skelly, and D. Towsley in Proceedings of IEEE INFOCOM, volume 1, pages 227-234, March 1999.
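As a simplified stand-in for the cited algorithms, the sketch below estimates the frequency error of the Node B oscillator by a least-squares fit of measured time-server offsets against local time; in a jittery network the more robust convex-hull or linear-programming methods referenced above would replace this plain regression, but the principle of accumulating many samples over hours is the same.

```python
def estimate_skew_ppb(samples):
    """Least-squares frequency-error estimate from repeated offset measurements.

    samples: list of (local_time_s, measured_offset_s) pairs, taken e.g.
    every 16 s or slower.  The slope of offset versus time is the relative
    frequency error, returned in parts per billion (ppb).
    """
    n = len(samples)
    if n < 2:
        return 0.0
    mean_t = sum(t for t, _ in samples) / n
    mean_o = sum(o for _, o in samples) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return 0.0 if den == 0 else (num / den) * 1e9

# Example: a 10 ppb drift observed over 24 hours of hourly, noise-free samples
samples = [(h * 3600.0, h * 3600.0 * 10e-9) for h in range(25)]
print(round(estimate_skew_ppb(samples), 2))   # ~10.0
```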
6.2.3.3 Location of the time server
As described above, the time server may be located inside the TGU; a solution which is particularly appealing if the TGU is located relatively close to the Node B, e.g. when using a local "BLAN".
If instead the TGU is centralized, i.e. using a "long distance" IP network between Node B and TGU, then it is not obvious that the time server is located inside the TGU; a commercially available standard time server could be used instead, e.g. an NTP server. This separated time server may be locked to other primary synchronization sources as described by standards with synchronization hierarchies for time servers using NTP, IEEEl 588 etc. The separated time server may also be implemented by using a GPS receiver, making it more independent of the jitter over the IP network, something particularly useful if the time server can be located close to the Node B units using it for synchronization reference.
The time server is connected to IP network, implying that it may be located wherever suitable (e.g. close to a window if a GPS receiver is used) and can e.g.
- be located at some central location or, better,
- use somewhat simpler time servers and instead distribute a number of them in the network, preferably as close as possible to the Node B's, thus reducing the inevitable timing jitter caused by the IP network, e.g. having each time server serving a group of Node B inside a particular building or group of buildings.
6.3 Establishing and releasing transport bearers
6.3.1 TGU terminating ALCAP
In a fully flexible implementation the TGU will be informed about all transport bearers being established, reconfigured and released. This can be done by TGU terminating the ALCAP connection for all Node B's connected to it. The TGU will then use the contents of the received ALCAP messages to
- configure ATM transport bearers (AAL2 CID)
- select IP connections and UDP ports to use to/from Node B - perform an admission control for ATM connection
- perform admission control for the IP connections
Admission control functions in TGU can be disabled either for ATM network or IP network or for both.
Once the TGU has selected the UDP/IP port to use for the requested transport bearer, the TGU sends a message to the Node B to inform the Node B which UDP/IP port it should use for a particular "binding ID". Alternatively, the TGU may implement a fixed mapping between Binding ID (BID) and UDP port for a particular Node B.
6.3.1.1 Admission control in TGU for ATM network side
The TGU can use the information received in messages in e.g. ALCAP to perform admission control for the ATM network. If the TGU has been configured (or in some other way been informed) about the maximum allowable bandwidth consumption per VP or per VC then TGU can compare the new request with e.g. either
- the sum of current consumption on the VP or VC where the new bearer is assigned by the RNC, or
- with the sum of all previous still active requests for bandwidth on the VP or VC where the new bearer is assigned by the RNC.
The TGU may also continuously monitor the load (e.g. delays, buffer sizes, queues) on the ATM connections and use this to decide if to allow the new ATM transport bearer to be created and started.
6.3.1.2 Admission control in TGU for IP network side
Similarly to the ATM side (above), the TGU may also perform admission control for the IP network side. If the TGU has been configured (or in some other way informed) about the maximum allowed bandwidth consumption on the IP network, then the TGU can compare a new request for additional bandwidth (the new transport bearer) with e.g. either
- the sum of the current consumption (measured or estimated) of bandwidth to/from the TGU on the IP network interconnect, or
- the sum of previous still active request for bandwidth (calculated or predicted) to/from TGU and IP network interconnect.
Since the transport bearer allocation request for the ATM network only states the requested bandwidth on the ATM side, the TGU will need to recalculate the bandwidth requirement to take into consideration the different overheads in ATM and IP networks.
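By way of a hedged example, the following sketch shows one possible recalculation from an ATM-side bandwidth figure to an IP-side estimate; the overhead constants (AAL2/ATM cell overhead, UDP/IPv4/Ethernet framing) and the assumed FP frame size are illustrative only and would have to be adapted to the actual encapsulation used.

ATM_CELL = 53          # bytes per ATM cell
ATM_PAYLOAD = 47       # assumed usable AAL2 payload bytes per cell
UDP_IP_ETH_OVERHEAD = 8 + 20 + 38   # UDP + IPv4 + Ethernet framing (assumption)

def atm_to_ip_kbps(atm_kbps, fp_frame_bytes=40):
    """Estimate IP-side bandwidth for a bearer whose ATM-side rate is atm_kbps,
    assuming each FP frame of fp_frame_bytes is carried in one UDP/IP packet."""
    payload_kbps = atm_kbps * ATM_PAYLOAD / ATM_CELL          # strip cell overhead
    frames_per_s = payload_kbps * 1000 / 8 / fp_frame_bytes   # approximate FP frame rate
    ip_bytes_per_s = frames_per_s * (fp_frame_bytes + UDP_IP_ETH_OVERHEAD)
    return ip_bytes_per_s * 8 / 1000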
If the TGU is requested to act as e.g. a "DiffServ" gateway, then this admission control can be performed according to the contract for different priority levels, see further in the section for "priority on IP network". The TGU can also implement admission control for preventing overload/congestion of the IP network; in such case the TGU may deny a transport bearer setup because the TGU and/or Node B suspects that the IP network is overloaded/congested at some point (which could be some router in between TGU and Node B and need not necessarily be the access point of Node B or TGU). For further details refer to the section for "handling of congestion in IP network".
6.3.2 Node B terminating ALCAP
An alternative solution is that the Node B terminates ALCAP and then sends a message to the TGU to set up the transport bearer and request the TGU to create a mapping between a certain CID (i.e. transport bearer) and UDP port. In this case the Node B can (if needed/requested by the operator) implement admission control both for the ATM connections (connections via TGU) and the IP network. The same kind of procedures as described in the previous section can be implemented also in the Node B.
Admission control of ATM interconnect can be implemented in Node B but will require that Node B has been configured with information about allowed capacity of VP and VC used in the TGU. In a further improvement the Node B can also receive measurements collected by TGU for the ATM interconnect and use this information for its admission control procedures.
Admission control of the IP interconnect (e.g. to avoid IP network overload) can be implemented in the Node B using the same procedures as described in previous chapter.
6.3.3 Fixed mapping CID-UDP port
It is also possible to create a static mapping between VPI-VCI-CID and IP-UDP port. In such case the TGU need not be informed about setup/release of a transport bearer. This method is much simpler to implement than the other methods described above. The disadvantage with this method is that the TGU cannot do any admission control (i.e. checking if the requested transmission bandwidth on the ATM and/or IP network is available).
In this implementation the VPI corresponds to a VP directed to a certain Node B, i.e. the connection between VPI and address in the IP network needs to be configured into the TGU. The Node B address in the IP network may either be a fixed IP address or an address assigned via DHCP; in the latter case the TGU can find the IP address using DNS. As soon as the TGU knows the connection between VPI and IP address, the mapping between VCI-CID and UDP port can be implemented as a simple mathematical formula hard coded into the software in TGU and Node B. Also in this case the Node B can implement admission control using the procedures described in previous sections.
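One possible, purely illustrative form of such a hard coded formula is sketched below; the base port, VCI offset and ranges are assumptions that would have to be chosen identically in the TGU and Node B software.

BASE_PORT = 49152          # start of the dynamic/private port range (assumption)
MAX_CID = 256              # CID is 8 bits in AAL2

def cid_to_udp_port(vci, cid, vci_offset=32):
    """Map (VCI, CID) to a UDP port; identical code runs in TGU and Node B."""
    return BASE_PORT + (vci - vci_offset) * MAX_CID + cid

def udp_port_to_cid(port, vci_offset=32):
    """Inverse mapping, used when receiving a datagram on a given port."""
    index = port - BASE_PORT
    return vci_offset + index // MAX_CID, index % MAX_CID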
6.4 Creating priority tags for IP network
6.4.1 General about priorities in IP network
In a heavily loaded transmission network it may sometimes be necessary to prioritize between traffic transferred in the network, thus trying to ensure a correct, safe and timely transfer of high priority delay sensitive data streams, in the worst case at the cost of delaying or even discarding low priority traffic. For IP based networks the main methods used are called DiffServ (RFC2474, 2475 etc) and MPLS. In any priority method the network routing element depends on the end points assigning all packets a correct priority. For the TGU-Node B communication this priority needs to be assigned by the TGU for downlink IP traffic and the Node B for uplink IP traffic.
6.4.2 Using DiffServ for prioritization
When using DiffServ the prioritization implies that the transmitting node (TGU or Node B) needs to set a proper value in the "type of service" field (DSCP) in each IP header according to the priority selected for this particular IP packet.
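A minimal sketch of such marking, assuming a POSIX-style socket API and example DSCP values (the code points shown are assumptions, to be chosen by the operator), could look like:

import socket

DSCP_EF = 46        # expedited forwarding, e.g. delay-sensitive user plane
DSCP_AF31 = 26      # assured forwarding, e.g. control plane (NBAP, ALCAP)
DSCP_BE = 0         # best effort, e.g. O&M bulk transfer

def open_marked_socket(dscp):
    """Create a UDP socket whose outgoing IPv4 packets carry the given DSCP
    (DSCP occupies the upper six bits of the former type-of-service byte)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

# Example: send one frame protocol PDU with high priority.
# open_marked_socket(DSCP_EF).sendto(b"...fp frame...", ("192.0.2.1", 50000))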
As an option the TGU and/or Node B could also include the policing and shaping functions required by the DiffServ cloud (i.e. the IP network protected by the DiffServ), in which case the node doing this needs to be configured with the "service contract". Similar procedures can be used for other types of prioritization schemes, e.g. when implementing BLAN over MPLS networks instead of pure IP networks.
6.4.3 Priority for control information to/from Node B
Data flows containing traffic control information should typically be given a high priority on the IP network; the reason for this is that delayed or lost messages may cause time outs on higher layers (RRC etc), dropped calls etc. Data flows containing operation and maintenance information (O&M) could typically be given a lower priority on the IP network; the reason for this is that most of these flows are not real time critical (e.g. software downloads) and that the communication either is protected by retransmission protocols such as TCP and FTP or can be protected on the application layer, e.g. by the originating node resending a request message if no reply was received within a defined time.
6.4.4 Priority for user plane derived from ATM service class
For an ATM based Iub communication the main method used for creating prioritization for user data flows (DCH etc carried over 3GPP frame protocol) is to assign each "AAL2 transport bearer" (e.g. a bearer assigned to a particular DCH or set of coordinated DCHs) to a VCC (where the VCC is identified by its VPI and VCI) with a given ATM service class. Different networks support different numbers and types of ATM service classes, but typically an ATM network supports e.g.
- CBR (constant bit rate)
- UBR (unspecified bit rate)
- UBR+
- Rt-VBR (real-time variable bit rate)
- Nrt-VBR (non-real-time VBR)
Each of these types corresponds to a priority level defined by the network; the type of priority and the handling of priority differ between different network implementations.
For downlink data streams the TGU knows which VCC received the data from the RNC, and can therefore use any information about ATM service class of the VCC to assign a priority for the IP network.
For uplink data streams the Node B needs to know the ATM service class of the VCC which the TGU will map the particular user data on. The Node B can obtain this information in a number of different ways:
- If Node B terminates ALCAP then the ALCAP message itself informs Node B on which VCC the transport bearer will be assigned. If Node B also knows the ATM service class of that VCC then it can assign IP network priority according to this.
- If Node B gets information about the assigned VCC in some other way, either from RNC (via e.g. NBAP) or from TGU (via some message originating from TGU), then if Node B knows the ATM service class for the VCC it can assign IP network priority according to this.
- Node B may get information about ATM service class or IP network priority from the TGU via some kind of control message sent from TGU to Node B
- When Node B gets downlink userplane data from TGU this data is marked with a priority, and Node B can simply use the same priority for the uplink information associated with the same AAL2 transport bearer, i.e. data mapped to the same UDP port.
6.4.5 Alternative methods for deciding on userplane priority
Sometimes ATM service classes for VCC cannot be used for prioritization, e.g. because the ATM network or RNC implementation does not use this feature. In such case the IP network priority could be selected using other means, e.g.:
- If RNC assigns "frame handling priority" for the DCH in the NBAP messages for radio link setup/reconfigure etc then this priority may be used also for the IP network
- The TGU and/or Node B could calculate IP network Priority from information received in ALCAP
- Node B and TGU can select priority level depending on knowledge about the type of data that will be transported on that particular transport bearer, e.g.:
• Assigning higher priority for "common transport channels", mainly because these contain time critical information
• medium priority for dedicated channels
• lowest priority to transport bearers associated with HSDPA or similar services.
The Node B/TGU may use other information received from RNC to also differentiate priorities between different types of dedicated channels, e.g. assigning higher priority to "speech calls" and/or other types of circuit switched services; Node B can deduce the type of end-user service by looking at detailed parameters for the radio access bearer RAB when the RNC configures/reconfigures the radio link, e.g. number and type of transport channels, transport formats for transport channels and the ToAWS-ToAWE (i.e. timing window for Node B reception of downlink userplane data on Iub from RNC). In particular the timing window gives Node B a very good hint about the priority and timing constraints the RNC wants to assign to a particular transport bearer.
For common transport channels (e.g. RACH, FACH and PCH) fixed priority levels may be assigned either hard coded in the software in TGU and Node B or defined by the operator as part of the configuration of the Node B and/or TGU. This predefined priority level could be an absolute level, or some kind of offset related to other types of traffic.
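As an illustration of the ordering suggested above, the following sketch selects a DSCP purely from the type of transport channel; the table entries are assumed example values, not standardized mappings.

CHANNEL_PRIORITY = {
    "RACH": 46, "FACH": 46, "PCH": 46,   # common channels: highest priority
    "DCH-speech": 34,                    # circuit switched dedicated channels
    "DCH-data": 26,                      # other dedicated channels
    "HSDPA": 10,                         # HSDPA and similar: lowest priority
}

def select_dscp(channel_type, default=0):
    """Return the DSCP to use for a transport bearer of the given type."""
    return CHANNEL_PRIORITY.get(channel_type, default)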
6.5 Handling of congestion in IP network
6.5.1 General about congestion
The TGU will be interfaced with ATM towards the RNC and with an IP network towards one or a number of Node Bs. Between the TGU and the Node Bs there will be several IP routers that handle the traffic between them. The same IP network may also be handling other types of IP traffic, e.g. if the network is a WAN/LAN used for public internet. Typically, routers in IP networks respond to congestion (overload) by delaying and/or dropping datagrams.
For IP networks standard solutions exist for handling priority between different kinds of IP traffic, e.g. DiffServ (RFC2475 etc.); however, these are not always used, and even if used they cannot always solve the problem with overload/congestion.
For TGU-Node B communication the problem with a standard IP network solution is that in an overload/congestion situation IP packets will be delayed/dropped in a random fashion and neither Node B nor TGU/RNC will be informed about this, but only see the effects.
Furthermore, most of the control plane data and user plane data exchanged on Iub between RNC and Node B (possibly via TGU) are protected by different kinds of retransmission protocols, e.g. SCTP or SSCOP for NBAP, and retransmission functions for user plane traffic implemented in the RLC/MAC functions of the RNC and Ue. This retransmission could in the worst case cause an avalanche in a congestion situation, causing breakdown of segments of the network.
To avoid this, it is important that TGU and/or RNC plus Node B can get an early warning of a potential congestion situation and take action to decrease the traffic before service quality degrades too much; if the RNC/TGU and/or Node B manage to reduce traffic on the IP network in a controlled way then a disaster situation can be avoided and the IP network can recover faster from the congestion, without being overloaded by e.g. retransmissions. If all (or at least the critical) routers in the network monitor and report the load situation to some central management element, then TGU and/or Node B may be informed about this in order to take necessary action to reduce their load on the IP network.
If information from such kind of centralized supervision is not available for the TGU and/or Node B units, then these nodes themselves need to be able to detect a potential overload problem. A solution to this latter problem will be described below.
6.5.2 How TGU-Node B detects congestion/overload of IP network
In order to detect a potential congestion/overload of the IP network, both TGU and Node B need a method for
- measuring IP network delay for the IP network path/paths between the Node B and TGU and vice versa, and/or
- detecting that IP messages have been dropped in the IP network between the TGU and Node B or vice versa.
Where possible, both methods should be used simultaneously, thus making the TGU-Node B IP Iub supervision independent of the congestion policies used by routers in the IP network, i.e. whether they mainly drop or mainly delay traffic. However, in some applications it may be necessary/preferred to implement only delay measurements or only counting of lost/dropped IP packets.
6.5.2.1 How to measure quality of the IP network
A Node B always needs to have a very stable internal clock; the reason for this is that this same clock is also used as frequency reference for the radio transmitter/receiver; 3GPP states that this clock must have an accuracy better than 100 ppb for a local area Node B and better than 50 ppb for a medium range-wide area Node B. The TGU should also implement a high accuracy internal clock; this would not require any expensive hardware since this clock can be frequency locked to the physical links (e.g. STM-1, E1) used for the ATM network between TGU and RNC. A more detailed description related to these features has already been provided above.
These high stability internal clocks in TGU and Node B's should either be synchronized to each other or synchronized towards some external reference clocks (e.g. NTP servers) aligned with some global time, e.g. UTC. There are a number of standard methods which can be used for synchronizing clocks, e.g.:
- NTP
- IEEE 1588
- 3GPP frame protocol for "node synchronization" (TS25.402)
Due to the high long-term stability of the Node B and TGU internal clocks (the latter mainly due to its lock to the physical lines of the ATM network) this synchronization can be made with very good absolute accuracy, as already noted.
6.5.2.1.1 Measuring delay on IP network
To measure delay on the IP network between TGU and Node B:
- TGU sends a message to Node B and just before sending it to the IP network the TGU stamps the message with current reading of its internal clock.
- When the Node B receives this message it immediately stamps it with its internal clock.
- Then the IP network delay from TGU to Node B (the downlink) can be calculated.
The same procedure can be used for measuring uplink delay by having Node B send a time stamped message to the TGU. Protocols to be used for measuring the delay are described later in this document.
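A minimal sketch of the downlink delay measurement, assuming already synchronized clocks and an assumed probe message layout (one timestamp per probe), could look like:

import socket, struct, time

def send_probe(sock, addr):
    """TGU side: timestamp just before handing the datagram to the IP stack."""
    sock.sendto(struct.pack("!d", time.time()), addr)

def receive_probe(sock):
    """Node B side: timestamp on reception and return the one-way delay [s]."""
    data, _ = sock.recvfrom(64)
    (t_sent,) = struct.unpack("!d", data[:8])
    return time.time() - t_sent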
6.5.2.1.2 Measuring number of lost IP packets
In order to estimate the number of lost IP packets between TGU-Node B and Node B-TGU, each sender and receiver needs to continuously count the number of packets sent and received. With some periodicity, e.g. once every 5 seconds, the TGU should send a status message to Node B telling it how many IP packets were sent since the last status message and Node B compares that with the number of IP packets received during the same period. The difference between the counter of packets sent from TGU and received in Node B gives the Node B information about the number of IP packets dropped/lost by the IP network.
The same procedure should also be used for the uplink direction, i.e. the Node B counting the number of IP packets sent and the TGU counting the number of IP packets received.
The counters exchanged could of course also be "accumulated number of IP packets sent and received" thus removing the problem with "sampling periods" not being identical in sending and receiving node.
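The following sketch illustrates loss estimation from such accumulated counters; the class name and the reporting period are assumptions.

class LossMonitor:
    def __init__(self):
        self.received_total = 0
        self.last_sent_report = 0
        self.last_received_at_report = 0

    def on_packet(self):
        self.received_total += 1

    def on_status(self, sender_sent_total):
        """Called e.g. every 5 s when the peer reports its cumulative counter."""
        sent_in_period = sender_sent_total - self.last_sent_report
        received_in_period = self.received_total - self.last_received_at_report
        lost = max(0, sent_in_period - received_in_period)
        self.last_sent_report = sender_sent_total
        self.last_received_at_report = self.received_total
        return lost, sent_in_period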
6.5.2.1.3 Implementing the measurements
6.5.2.1.3.1 Preferred method
The preferred method of implementing the above mentioned measurements is to introduce a new signaling protocol between TGU and Node B. This protocol should run on top of UDP-IP in the same way as the regular user plane signaling between the nodes. A reasonably simple proprietary protocol with only a few messages can be used. An alternate option could be to use the "node synchronization messages" defined by 3GPP in TS25.427 and TS25.435.
6.5.2.1.3.2 Alternative 1: Reusing other synchronization protocols
It is also possible to use existing protocols originally intended for IP network synchronization (NTP etc) for the above mentioned measurements, but in that case it would probably be necessary to modify the "protocol field" in the IP header to prevent IP routers from processing these messages differently (e.g. assigning them higher priority) than the IP packets used for user plane data (3GPP frame protocol) and control plane data (NBAP, ALCAP ...).
6.5.2.1.3.3 Alternative 2: Adding extra header to FP frames on UDP/IP
Another option for implementing these measurements is to not map 3GPP frame protocol frames (user data frames) directly on top of UDP/IP as defined by 3GPP, but to instead add an extra header containing a counter and a timestamp. In such a case the TGU would need to add this extra header to frame protocol frames in downlink and the Node B would check them. For uplink the Node B would add the extra header, and the TGU would check them and remove the extra header before transmitting the messages on the ATM network.
For a 3GPP Iub implementation we prefer not to use this method; the main reason being that it would violate the 3GPP standard regarding IP transport, thus creating a potential risk for problems with incompatible equipment (e.g. line listeners used for debugging of traffic) and a compatibility problem when migrating, i.e. removing the TGU in parts of the network and replacing it with an RNC with direct IP connections.
6.5.2.1.3.4 Alternative 3: Using existing fields in 3GPP FP frames
Each 3GPP userplane frame protocol message containing user data is stamped by the RNC and Node B with either a "system frame number" (SFN, for common channels) or a "connection frame number" (CFN, for dedicated channels).
In downlink the Node B shall check that the CFN of a particular message falls within a given capture window ToAWS - ToAWE as defined in TS25.402. Any variations in this could be used by Node B to detect if delay is increasing in the network from RNC to Node B. In the same way the RNC can detect an increasing delay. However, this method is difficult to use in the TGU because the TGU would then need to know the relation between SFN and CFN for each transport bearer (something that could be solved by Node B sending a message to TGU about this). There are a number of other small drawbacks with this method:
- The measurement depends on Node B and RNC actually transmitting the data at a constant offset to SFN/CFN, i.e. any transmit timing variations caused by load inside RNC and/or Node B could be misinterpreted as delay variations on the IP network. However, this node internal delay is most probably rather small compared to the delays of the IP network.
- the resolution in SFN/CFN is only "one frame" (=10ms).
- There is no way of detecting dropped/lost messages
In essence, CFN/SFN stamping of 3GPP Frame protocol frames could be used for implementing the measurements needed, but it is much simpler to get the information by introducing completely new and dedicated messages between TGU and Node B as described with the preferred method above.
6.5.2.2 Detecting a potential overload/congestion of the IP network
The preferred implementation is that Node B and TGU by sending messages over the IP network periodically exchange information such that both TGU and Node B keeps statistics of IP network delay, delay variation and lost IP packet for both uplink and downlink.
If both TGU and Node B have the same kind of information then both nodes can take actions immediately if a suspected overload/congestion situation is detected. If both delay and lost IP packets are measured over the IP network then each transmitting node (i.e. TGU for downlink and Node B for uplink) should:
- monitor the delay and delay variation reported by the receiving node,
- monitor amount of lost IP packets reported by the receiving node.
Should delay or delay variation or loss of IP packets suddenly rise above the normal value OR above an operator configured threshold, then the node responsible for transmitting in the degraded direction shall immediately take action to resolve the situation, as described below. Preferably the supervision shall use at least two thresholds:
- a warning level, indicating that at least something should be done to prevent the situation from getting worse,
- a critical level, indicating that the load on the IP network must be decreased significantly immediately.
At warning level the transmitting node should start dropping frame protocol (FP) frames as described below. At critical level either the amount of dropped FP frames should be increased significantly or connections need to be dropped, as described below.
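A simple sketch of this two-threshold supervision is given below; the threshold values are placeholders for operator configuration and the returned actions correspond to the steps described in the following sections.

def supervise(delay_ms, loss_ratio, warn=(20, 0.01), critical=(60, 0.05)):
    """Return the action the transmitting node should take for this direction."""
    if delay_ms >= critical[0] or loss_ratio >= critical[1]:
        return "drop-connections"       # reduce load significantly, immediately
    if delay_ms >= warn[0] or loss_ratio >= warn[1]:
        return "discard-low-prio-fp"    # start discarding low priority FP frames
    return "normal"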
6.5.2.3 Congestion control when using priorities on IP network
Sometimes the data between TGU and Node B will be separated on different priority levels on the IP network (e.g. using DiffServ priorities). If different priority levels are used on the IP network between TGU and Node B then supervision for congestion/overload should be done separately for each priority level, thus making it possible to detect congestion e.g. only affecting "low priority traffic" (e.g. user data for the packet switched (PS) end-user services). In such case:
- the nodes (TGU and Node B) need to have separate counters of sent and received IP packets per priority level,
- messages between TGU and Node B for measuring delay of the IP network need to be sent on each IP priority level.
Measuring delay and/or lost IP packets per priority level also makes it possible for TGU and/or Node B to implement a congestion/overload warning with different thresholds for different types of traffic, e.g.:
- tolerating worse IP network behavior for PS userplane than for e.g. a high priority circuit switched (CS) service like speech,
- tolerating fewer dropped packets but somewhat longer delay for control plane signaling such as NBAP over Iub.
6.5.3 Resolving a potential overload/congestion of the IP network
In order to resolve a potential overload/congestion problem of the IP network, the Node B and/or TGU must reduce the amount of data the node transmits. There are several methods for reducing the amount of transmitted data:
- selectively discarding userplane frame protocol (FP) frames,
- dropping connections,
- admission control
These methods will be described below.
6.5.3.1 Selectively discarding FP frames
The first step in a suspected congestion/overload situation is that the transmitting node selects some FP frames which are discarded and not sent out on the IP network. This must be done by the Node B for uplink data and the TGU for downlink data; the main advantage with this is that if congestion/overload is only present in one direction of the IP network (e.g. from TGU to Node B) then the other direction is unharmed.
In the simplest implementation the transmitting node selects FP frames to drop from the transport bearers with assigned lowest priority, i.e. transport bearers which will be carried on ATM connections with lower service class. No frames should be dropped from UDP ports dedicated for Control plane information, e.g. NBAP and ALCAP.
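As an illustration only, the simplest discard policy could be sketched as follows, where control plane ports are never touched and frames on the lowest-priority bearers are dropped first (the data structures and names are assumptions):

def frames_to_send(queued, congested, control_ports):
    """queued: list of (udp_port, priority, frame); lower priority = drop first."""
    if not congested:
        return [frame for (_, _, frame) in queued]
    user_priorities = [p for (port, p, _) in queued if port not in control_ports]
    lowest = min(user_priorities) if user_priorities else None
    return [frame for (port, p, frame) in queued
            if port in control_ports or p != lowest]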
In a more advanced implementation the transmitting node (TGU in downlink and Node B in uplink) also examines the individual FP frames and avoids discarding FP frames marked with Frame Type=Control, i.e. only discarding FP frames containing user data. If the RNC (which not all RNCs do) via NBAP has assigned a "frame handling priority" for the DCH/transport bearer then this priority field should be used to select which frames to discard. This can be done by the Node B for uplink (which has received this information from RNC via NBAP) and/or the TGU for downlink (but then the TGU needs to get this information from the RNC, possibly via proprietary signaling from the Node B which receives this information in NBAP messages from the RNC).
6.5.3.2 Dropping connections
Should the method of selectively discarding FP frames not resolve an overload situation, then instead of discarding more and more FP frames the next step is to drop complete radio link sets (RLS) and/or HSDPA data flows, including all transport bearers associated with these.
In the case of a TGU in front of an RNC, the RNC is not aware of the overload/congestion supervision performed by Node B and TGU, so either Node B or TGU or both need to decide and take action. In such case the preferred implementation is that the decision about dropping transport bearers and/or complete RL/RLS is taken by the Node B.
To drop transport bearers the Node B needs to select which dedicated radio link sets (RLS) and/or HSDPA data flows dedicated to a certain mobile should be dropped. The decision on what to drop can be based on:
- allocation/retention priority assigned by RNC (if RNC assigns this),
- frame handling priority (if RNC assigns this),
- priorities indicated by ALCAP message from RNC (if RNC uses this),
- priorities indicated indirectly by which ATM service class has been assigned by RNC for the transport bearers associated with this RLS/HSDPA data flow.
If none of this priority information is available for the Node B, then the Node B may select to drop e.g. any RLS or HSDPA data flow.
The preferred method for dropping RLS / HSDPA data flows is that Node B sends an NBAP message (e.g. NBAP message Radio link failure or Error indication) with proper cause value to the RNC. After this the RNC should as soon as possible remove the RLS including all transport bearers.
As an option the Node B could also send a message directly to TGU asking the TGU to stop transferring data corresponding to the transport bearers of the RLS; however, this is in most cases probably not needed.
In a disaster-like situation, the TGU could also autonomously decide to drop a number of downlink data connections, i.e. discarding all downlink data for those transport bearers.
6.5.3.3 Admission control
As described earlier in this document, admission control is a way for a node to deny a request for new or modified bandwidth, e.g. when the RNC tries to set up a new transport bearer and/or modify the reserved bandwidth for an existing one. Where admission control is implemented, it should also be used to combat overload/congestion by denying setup/increase of new services (e.g. new calls) if the IP network is already under stress and close to overload.
The admission control in this case can be performed by e.g.
- TGU denying ALCAP transport bearer establish/reconfiguration request (when TGU terminates ALCAP),
- Node B denying NBAP setup/reconfiguration messages,
- Node B denying ALCAP transport bearer establish/reconfiguration request (when Node B terminates ALCAP)
6.5.3.4 Informing the RNC about the problem
The Node B can use the NBAP message radio link failure to indicate to the RNC that a radio link set needs to be dropped due to a problem on the IP network between Node B and TGU.
If the RNC has implemented the NBAP message ERROR INDICATION (not implemented by all RNCs), then this can instead be used from Node B to RNC to inform the RNC that a radio link set needs to be dropped due to problems on the IP network.
If the RNC implements ATM F4 or F5 flows, then the TGU may inform the RNC about a problem on the IP network by issuing AIS or RDI on one or more of the ATM VPs or VCs.
6.6 FP packing to reduce load on IP network
The transfer of user plane data over Iub has been designed for optimum quality of service, e.g. for delay sensitive services like speech. In order to achieve this, 3GPP chose in the first phase to design for ATM connections between RNC and Node B.
Most of the user plane frame protocol (FP) packets transferred over Iub are relatively small, typically less than 45 bytes. Instead the frequency is rather high, e.g. for each AMR speech call one FP packet is transferred every 20 ms. For a single cell carrier Node B, the maximum number of simultaneous speech calls is about 100, which gives a total rate of FP frames in excess of 4 kHz. It is not obvious that this kind of solution works well for an IP based network, where typically the IP packets are larger (max MTU in the order of 1500 bytes) and less frequent. For cases where the routing capacity (number of IP packets routed per second) of the IP network is limiting, it would be much better if the end-points (in our case TGU and Node B) reduced the amount of packets and instead made each packet bigger.
In normal operation the TGU and Node B map one FP message (containing e.g. one AMR speech frame) onto one UDP-IP packet for the IP transport network. This method makes the transform between ATM and IP easy, but at the cost of an unnecessarily high frequency of small IP packets on the IP network. However, this is the preferred method since this is what is recommended by 3GPP. As an option the TGU and Node B may also pack several FP messages into the same UDP-IP packet.
6.6.1 Method for Frame protocol packing
For packing of FP messages in downlink the TGU has a small internal buffer with a size corresponding to max MTU of the IP network. TGU and/or Node B should be configured with MAX MTU of the IP network in order to assure that the transmitting node does not generate IP packets longer than MAX MTU. All FP messages incoming to TGU from the ATM network will be added to this buffer in the same sequence as they arrive from the ATM network.
The downlink buffer in TGU is sent as a message over the IP network to Node B as soon as either
- the first message in the current buffer has been stored in the buffer more than an allowed maximum delay time, typically <5 ms, or
- next incoming FP message from ATM cannot be added to the buffer because it is full.
When receiving a packed message from the IP network, the Node B can then unpack the message and extract the individual FP messages. For uplink the Node B performs the same packing process, with the difference that in this case the FP messages have been produced by uplink signal processing inside the Node B. If priority is used in the IP network, e.g. using DiffServ, then TGU and Node B should implement separate buffering and packing for each priority level.
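A sketch of the downlink packing buffer described above is given below; the MTU, maximum delay and the send callback are assumptions, and a real implementation would also prefix each FP message with its length so that the receiver can unpack the datagram.

import time

class FpPacker:
    def __init__(self, send, mtu=1500, max_delay=0.005):
        self.send, self.mtu, self.max_delay = send, mtu, max_delay
        self.buffer, self.oldest = [], None

    def add(self, fp_message):
        """Append an incoming FP message in arrival order."""
        if sum(map(len, self.buffer)) + len(fp_message) > self.mtu:
            self.flush()                          # would not fit: send first
        if not self.buffer:
            self.oldest = time.time()
        self.buffer.append(fp_message)

    def poll(self):
        """Call periodically; flushes if the oldest message waited too long."""
        if self.buffer and time.time() - self.oldest > self.max_delay:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(b"".join(self.buffer))      # one UDP datagram to Node B
            self.buffer = []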
6.7 Operation and maintenance
Existing 3GPP networks are based on ATM transmission from RNC to Node B; this implies that all tools for operation and maintenance (O&M) of these networks are designed and optimized for this kind of network. We have developed a transmission gateway unit (TGU) which enables the operator to use IP transport to the base station, also called BTS or Node B, while still having an RNC with an ATM interface.
The best solution is of course to make the TGU and its O&M completely incorporated and visible in the O&M systems for the network; however, this may imply a substantial amount of work to update all these "ATM oriented" O&M tools to also take into account a new node (the TGU) and an IP network.
6.7.1 O&M of TGU directly
In the preferred solution, the TGU will be configured with all data needed for the termination of ATM PVCs and information on how data shall be transformed onto the IP network. For each Node B the TGU needs to be configured with at least:
- the address of the Node B in the IP network, which could be a fixed IP address or a logical name which can be used for lookup using DNS, and
- detailed data for all ATM connections (VP-VC) intended for this Node B; this data includes e.g. ATM service class, parameters for the VC etc.
6.7.2 O&M of TGU via Node B
To minimize the impact on the existing O&M tools, as an alternative solution the configuration data for ATM parameters is sent to the Node B (as if the node had been connected via ATM). When the Node B receives this data it determines the IP address of the TGU it has been assigned as interface to the ATM world, and then sends the configuration parameters for ATM to the TGU. The same idea can be used for fault management and performance management, i.e. the Node B collects from the TGU data related to the Node B's interface to the ATM world; then the Node B reports this information to the central O&M system, making the TGU virtually invisible (but still managed) in the O&M network. A further advantage of this method is that since the Node B holds all data, if a TGU fails (or the Node B fails to contact a particular TGU) then the Node B can instead try to establish contact with another TGU (a hot standby). When the Node B has sent all configuration to that TGU, the only remaining action for a switch-over would be to change the ATM network switching such that the VPs are switched to this new TGU.
6.8 Security aspects
If the IP network used for communication between TGU and Node B is in some way accessible to the public, then some kind of protection will be needed to prevent intrusion and disturbance of the operation of the nodes. The best way to achieve this would be to put all the IP communication between TGU and Node B on a VPN connection, preferably protected by IPsec or similar. However, this may be overkill and instead the protection may depend on:
- that user plane data between RNC and Ue is mostly protected by encryption implemented by the RLC/MAC function in RNC and Ue; i.e. this data is already protected from eavesdropping and modification.
- The traffic control information between RNC and Node B (NBAP, ALCAP etc) and between TGU and Node B (if and where used) is normally not protected other than by the fact that it is completely binary and in an uncommon format. For increased protection these particular data flows may be encrypted between TGU and Node B. There are a number of standard methods for achieving this kind of encryption, e.g. MD5 algorithms scrambling the bits transferred.
- The O&M information could preferably be encrypted using IPsec.
For further protection the TGU and Node B may implement a firewall using e.g. an IP address filter protecting the nodes from malicious traffic. An optional solution would be to put a security gateway either in front of the TGU or to incorporate this into the TGU. This security gateway function could then terminate VPN tunnels from the Node B, i.e. the Node B terminates the other end of the tunnel. On these VPN tunnels e.g. IPsec in ESP tunnel mode could be used for the control plane (NBAP, ALCAP and O&M). For userplane (corresponding to the AAL2 transport bearers) transport over the IP network the frame protocol messages could also be transported over a tunnel, either with encryption (e.g. IPsec ESP) or with null-encapsulation, i.e. without additional encryption for the IP network.
6.9 Use of DHCP
6.9.1 Alt1: Use of DHCP by Node Bs for OAM/User plane IP addresses
In option 1 there must be one OAM VCC configured for each Node B in the TGU. All AAL5 frames received on the OAM VCC are encapsulated in IP/UDP frames (IPoATM over UDP/IP) by the TGU and sent to the Node B's. This is illustrated in Figs 15 and 16. In this option both the Node B OAM and UP IP addresses can be configured with DHCP. For the UP IP address the TGU may act as a DHCP server if needed, but this requires DHCP relay agents (one for each hop) in the IP network between TGU and Node B. For the OAM IP address the "normal" DHCP server in the OAM network may be used. The precondition is that the lower IP address (UP IP) is already configured since it will be used to carry all packets to the TGU. The Node B will reply to Inverse ATM ARP requests sent on the VCC. Since IP over ATM is encapsulated over UDP/IP it may be required to decrease the MTU because of the extra header (28 bytes).
6.9.2 Alt2: Use of DHCP by Node Bs for OAM/User plane IP addresses
In option 2 there must be one OAM VCC configured for each Node B in the TGU. For each VCC the TGU will have its own IP address (IP over ATM). Packets sent to the TGU IP address will be forwarded by the TGU to the Node B, except for Inverse ATM ARP which the TGU will respond to. This is illustrated in Fig. 17. When forwarding the packets to the Node B the destination IP address is modified to the Node B UP IP address. When forwarding the packets from the Node B the TGU looks at the IP source address to select the VCC to forward the packet on. Before forwarding, the source IP address is modified to the TGU IP over ATM address. In this option the Node B only has one IP address (UP IP) but from the OAM network it looks like two addresses since all IP packets to the TGU are forwarded to the Node B. In this option both the Node B OAM and UP IP addresses can be configured with DHCP.
For the UP IP address the TGU may act as a DHCP server if needed, but this requires DHCP relay agents (one for each hop) in the IP network between TGU and Node B. For the OAM IP (TGU IP) address the "normal" DHCP server in the OAM network may be used. In this option the TGU will act as a DHCP client to configure the address. The Node B does not have to know this IP address.
Another solution is to use the TGU as a DHCP server for both user plane IP addresses and OAM IP addresses. The TGU will still answer InvARP with the OAM IP address provided to each Node B.
6.10 Handling of jitter for userplane
6.10.1 General
The user plane transport over Iub as specified by 3GPP is mainly/originally intended for transport over ATM networks. ATM networks are mostly designed to be able to provide a guaranteed quality of service in terms of loss of frames and timeliness in delivery, i.e. the jitter in transport delay is generally assumed to be rather low (in the order of a few ms for prioritized bearers).
Seen from the Node B, jitter in the time of arrival of userplane data from the RNC can be caused either by the RNC not sending the data with correct timing (e.g. due to varying processing and routing load inside the RNC) or by time-varying delay in the transport network between RNC and Node B. The same applies for uplink data, i.e. the transmit time from the Node B may vary due to internal load of the Node B, and the delay over the transport network may also vary. In order to cope with this, 3GPP specifies in TS25.402 (and in TS25.427 and TS25.435) that the Node B shall have a "time of arrival window" for capturing downlink userplane FP frames received on Iub, where each userplane FP frame is clearly marked with the CFN (connection frame number) or SFN (system frame number) when that particular frame should result in downlink data transmitted over the air interface (Uu). This "time of arrival window" is given by the RNC to Node B as ToAWS and ToAWE for each transport bearer (TS25.402), i.e. each AAL2 transport bearer (when ATM is used as backhaul to the Node B) carrying data for a transport channel or group of coordinated transport channels. This time of arrival window for userplane can also be used for handling of jitter of an IP based interconnect to the Node B; however, there are some problems related to this:
- it is most probable that the window size needs to be increased due to the larger jitter when using IP instead of ATM for connection to Node B.
- Furthermore, some RNC types do for some services try to reduce downlink delay by sending userplane data as late as possible, i.e. the RNC tries to send downlink FP frames with such timing that they arrive as close to LTOA as possible (see TS25.402 section 7.2), giving very little space for jitter. (The method for the RNC to know the time of arrival in Node B is to use the FP messages "UL synchronization" and "DL synchronization" specified in TS25.427 and TS25.435.)
6.10.2 Increasing time of arrival window in downlink
When using IP transport to the Node B the RNC can be modified to ask the Node B to set up larger time of arrival windows. Time of arrival windows configured by the RNC may be possible to adjust for a certain implementation of the transport network, but this is an implementation choice made by the RNC designer/vendor.
For some RNC implementations it may be possible to adjust the settings of ToAWS - ToAWE per Node B. In other RNCs a changed value will be used for all Node B's connected to the same RNC, which may cause problems if an RNC mixes Node Bs connected directly by ATM and Node Bs connected over IP via a TGU. In the latter case it may be necessary for the Node B to increase the ToAWS - ToAWE setting received from the RNC on its own. This procedure can be useful even if the RNC can have its ToAWS - ToAWE settings adjusted to match the behaviour of the IP network being used. The actual values used by Node B could in this way be adaptable, i.e. Node B uses statistics (or other information) about the expected jitter of the network in order to decide the size and position of the window. The Node B may even adjust the window during operation to match the current behaviour of the IP network.
In any of these cases it may be advantageous/necessary to use different window sizes/positions for different kinds of services and/or priority levels on the IP network. If Node B uses statistics to move/resize the time of arrival window, then this statistics calculation should be done per priority level on the IP network.
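The following sketch illustrates how a Node B could derive a window size from collected delay samples per IP priority level; the percentiles, margin and minimum size are assumptions, and mapping the resulting size onto the ToAWS/ToAWE parameters of TS25.402 is left open.

def window_size_ms(delays_ms, min_size_ms=10.0, margin_ms=5.0):
    """Suggest a time-of-arrival window size from recent downlink delay samples
    (one list per IP priority level)."""
    if len(delays_ms) < 10:
        return min_size_ms                      # too little data: keep the minimum
    samples = sorted(delays_ms)
    spread = samples[int(0.99 * len(samples))] - samples[int(0.01 * len(samples))]
    return max(min_size_ms, spread + margin_ms)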
In a future development (regardless if Node B adjusts windows or not) this kind of timing data collected by the Node B may be reported back to the RNC thus giving the RNC the possibility to already during set up of new channel select a suitable time of arrival window for the associated transport bearer (carried over IP or ATM) . The reporting mechanisms in such a case could also be implemented via an existing "performance management"/"performance counters" reporting system where Node B reports collected data to e.g. an OMC (operation and maintenance center) supervising the network.
6.10.3 Jitter buffer in TGU for uplink
In the same way as for downlink, the RNC should have some kind of jitter buffers in uplink. However, 3GPP does not state any particular requirements regarding the implementation of those, and hence the implementation will be different between different manufacturers. The time of arrival windows in RNCs released today are most probably designed and optimized for ATM connections to the Node Bs. In some implementations of RNCs the window size may be possible to adjust by e.g. configuration data. But in some implementations the windows may be hard coded in the RNC and not possible to tweak for a certain implementation of the network. When uplink userplane data from all or some of the Node Bs connected to an RNC is transported partly over an IP network, then it may be necessary to modify the time of arrival windows used by the RNC. If this is not done, uplink userplane FP frames may be lost due to incorrect time of arrival in the RNC. If the windows in the RNC cannot be adjusted to match the requirements imposed by the IP transport network, then the TGU can implement a "jitter buffer" making it possible to give a better and more stable timing of uplink data towards the RNC.
7 Protocol stacks
General comment: ATM transport can be performed on different types of media; one example is STM-1, another example is using a single E1 line, yet another example is to use multiple E1 lines either with or without IMA between the lines. In the pictures below any protocol used for transport of ATM shall be regarded as an example usable in different embodiments. The same applies to the IP transport, where most pictures indicate that IP is transported over Ethernet, but of course other types of media can be used for transport of the IP traffic, e.g. Gigabit Ethernet, Wireless LAN, WiMAX etc.
7.1 NBAP control plane signaling
An RNC using R99/R4 will put NBAP control plane signaling on a dedicated AAL5 PVC using SSCOP. A 3GPP aligned solution for the BLAN would, in one embodiment as illustrated in Fig. 5, look like this:
- For NBAP control plane signaling the TGU translates between the SSCOP and SCTP protocols.
- NBAP is terminated in the NODE Bs.
An intermediate development step or alternative solution is to use SSCOP over UDP-IP for NBAP, as illustrated in the embodiment of Fig. 6. In such case
- For NBAP control plane signaling the TGU performs forwarding of AAL5 frames to UDP frames.
- NBAP is terminated in the NODE Bs.
- One specific PVC (PHY/VPI/VCI) is mapped to a specific UDP port and IP address. The same port could be used at both TGU and NODE B side.
- Control plane routing is preferably performed as described in chapter 7.5.
7.2 Transport bearer setup, control plane signaling
7.2.1 AAL2 signaling protocol (ALCAP)
An RNC using R99/R4 will use ALCAP to control transport bearer allocation etc. The ALCAP will be placed on a dedicated AAL5 PVC using SSCOP, as is also illustrated by the AAL2 signaling (ALCAP) in Fig. 7:
- For AAL2 signaling (ALCAP) the TGU performs forwarding of AAL5 frames to UDP frames.
- ALCAP is terminated in the NODE Bs.
- One specific PVC (PHY/VPI/VCI) shall be mapped to a specific UDP port and IP address. The same port could be used at both TGU and NODE B side.
- Control plane routing is preferably performed as described in chapter 7.5.
7.2.2 Alternative solution for ALCAP handling
As an alternative to the solution described in section 7.2.1, the TGU may terminate ALCAP from the RNC, as also illustrated by the ALCAP-IP-ALCAP control plane signaling of Fig. 8. In such case:
- For the ATM transport control signaling on the network side TGU shall support ALCAP [ITU Q2630.2].
- For the internal IP transport the TGU shall support IP-ALCAP [ITU Q2631.1].
- The inter-work between IP-ALCAP and ALCAP shall be done according to [ITU Q2632.1]
- Control plane routing is preferably performed as described in chapter 7.5.
7.2.3 TGU - NBU interwork
For dynamic establishment and release of inter-working connections for user data a new proprietary TGU inter-working signaling protocol (TISP) will be used. The TGU shall act as the server for this protocol and the NODE B as a client. The server port for this shall be configurable. The protocol can be used on either UDP or TCP. Userplane signaling is described in chapters 7.6 and 7.7. As an alternative, TGU-NBU interwork may be based on IP-ALCAP as described in section 7.2.2.
The TGU-NBU interwork can be omitted in some implementations, in particular if the TGU implements a static mapping of VPI-VCI-CID versus IP address-UDP port. Mapping of VPI vs. IP address will then require that:
- either the Node B has been assigned a fixed IP address, or
- that TGU can lookup the IP address of the Node B using DNS, or
- that Node B at start up registers its IP address in the TGU it has been assigned via e.g. configuration data stored in the unit.
7.2.3.1 Inter-working setup request/acknowledge/reject
The client (NBU) will use an "inter-working setup request" message to request establishment of a new inter-working connection, i.e. connecting an AAL2 transport bearer from the ATM backhaul with an IP-UDP transport bearer (UDP port) on the BLAN. Supplied in the request will be PHY, VPI, VCI, CID, transaction id, downlink UDP port and downlink IP address. Uplink parameters are set to zero. The setup request should also include information on:
- requested priority (allowing TGU to prioritize between bearers),
- requested bandwidth up and downlink.
If the server (TGU) can accept the new connection then it will allocate an uplink UDP port and IP address and create an inter-working connection between AAL2 SSSAR CID and UDP/IP. If the operation is successful the TGU will respond with an "inter-working setup acknowledge" message. The message will include the allocated downlink parameters and the parameters supplied in the request. If for some reason an inter-working connection cannot be established the TGU shall respond with an "inter-working setup reject" message. The message will include a fault code value and the parameters supplied in the request. Examples of reasons for reject:
- Not enough bandwidth uplink on ATM backhaul (i.e. from TGU to RNC).
- Not enough bandwidth downlink on ATM backhaul (from RNC to TGU).
- Not enough processing/routing capacity in TGU.
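Since TISP is proprietary, its encoding is not defined here; the following is only an assumed example of how the fields listed above for the "inter-working setup request" could be packed into a datagram (field order, sizes and the message type value are illustrative assumptions).

import struct

SETUP_REQUEST = 1   # assumed message type value

def pack_setup_request(phy, vpi, vci, cid, transaction_id,
                       dl_udp_port, dl_ip, priority, bw_up_kbps, bw_dl_kbps):
    """Uplink UDP port and IP address are set to zero in the request;
    the TGU supplies them when it acknowledges."""
    return struct.pack("!BBHHBIH4sBII",
                       SETUP_REQUEST, phy, vpi, vci, cid, transaction_id,
                       dl_udp_port, bytes(map(int, dl_ip.split("."))),
                       priority, bw_up_kbps, bw_dl_kbps)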
The client (NBU) will use the "inter-working release request" message to request release of an established inter-working connection. Supplied in the request will be PHY, VPI, VCI, CID, transaction id, uplink/downlink UDP port and uplink/downlink IP address. The server (TGU) will release the connection between AAL2 SSSAR CID and UDP/IP. If operation is successful the TGU will respond with a "inter-working release acknowledge" message. The message the parameters supplied in the request.
If for some reason an inter-working connection cannot be released the TGU shall respond with an "inter-working release reject" message. The message will include a fault code value and the parameters supplied in the request.
7.2.3.3 Inter-working reset request/acknowledge/reject
The client (NBU) will use the "inter-working reset request" message to request release of all established inter- working connection. The server (TGU) will release all connection between AAL2 SSSAR CID and UDP/IP. If operation is successful the TGU will respond with a "inter- working reset acknowledge" message.
If for some reason the inter-working connections cannot be released the TGU shall respond with an "inter-working reset reject" message. The message will include a fault code value.
7.2.3.4 Measurements, fault reports etc
The Node B's connected to the BLAN may need to monitor the operation of the TGU, in particular if the RNC is not aware of the presence of the TGU in the path between the RNC and each Node B. For this reason the proprietary interwork between each Node B and TGU will enable the Node B to:
- order collection of measurements, e.g. current load on ATM backhaul from TGU including both traffic intended for this particular Node B and total load on the TGU (aggregated for all Node B using this TGU),
- get status indications, including e.g. fault reports and alarms.
7.3 NBU O&M signaling (Cub)
An embodiment for NBU O&M signaling is illustrated in Fig. 9. The NBU O&M signaling interface is transported over IPoA, one channel per NBU. The TGU shall forward between this interface and IP/Ethernet. IPoA shall be implemented according to RFC1483 (LLC/SNAP encapsulation) and RFC1577 (Classical IP and ARP over ATM). Preferably the MTU for IPoA should be configured to avoid fragmentation between the ATM and Ethernet interface. Control plane routing is preferably performed as described in chapter 7.5.
7.4 TGU O&M signaling (Tub)
An embodiment for external O&M signaling is illustrated in Fig. 10. The TGU remote O&M signaling interface is transported over IPoA. The TGU shall terminate this interface. For the TGU O&M applications SNMP and FTP shall be supported.
7.5 Control plane routing
For control plane signals the routing between the network side and BLAN shall be based on VP/VC on the network side and IP address + port number on the BLAN side. Protocol conversion shall be performed in the TGU; the conversion type shall be remotely configurable for each routed channel. The configuration shall be stored in persistent memory. Fig. 11 illustrates an embodiment of control plane routing, in an example with two control plane channels (CH1 and CH2).
7.6 User data signaling
For user plane signaling the TGU shall map the Frame Protocol between AAL2 SSSAR frames and UDP packets. User plane routing shall be performed as described in chapter 7.7.
Fig. 12 illustrates an embodiment of user plane signaling, where FP equals what is specified by 3GPP in specifications TS25.427 and TS25.435.
7.7 User plane routing
For the user plane the routing between the network side and BLAN shall be based on VP/VC+CID on the network side and IP address + UDP port on the BLAN side. Routing of individual data channels shall be dynamically configured with the proprietary protocol TISP.
Fig. 13 illustrates an embodiment of user plane routing, in an example with three user plane channels (CH1, CH2, CH3).
8 Implementation aspects
8.1 OneBASE Pico Node B Unit (NBU)
8.1.1 General
The existing OneBASE Pico Node B unit (NBU) is a complete 3GPP Node B supporting traffic, base band and radio for one UMTS FDD carrier and one cell. Basic functions, performance and layout of the OneBASE Node B are outlined in WO2005/094102, the content of which is incorporated herein by reference. The roadmap for Andrew development also includes a Micro Node B unit with up to two carriers; this would be based on the same source system (HW, FW and SW), and solutions for the Pico Node B are transferable directly to the Micro Node B.
Inside the NBU the termination of the physical backhaul has been placed on a separate circuit board ("the transmission board", see also the referenced document WO2005/094102), making it possible to deliver NBUs for different physical backhauls with minor changes to hardware and application software. Each NBU also has an Ethernet port (10/100 BaseT) which currently is only used for local on-site maintenance. The current version of OneBASE and its application software is designed for ATM based backhauls, and transmission boards for 2xE1 (with IMA) and STM-1 are available today.
8.1.2 Implementation
The existing NBU will be able to support IP transport using Ethernet on existing hardware. It would also be possible to design software allowing IP transport and ATM transport to be mixed on the same unit. The following SW/FW modifications would be needed for a first release:
- New software for adjusting the internal frequency reference via a timeserver.
- New firmware for handling of user data over UDP instead of AAL2.
- New software for setup and release of individual IP connections, including communication with TGU.
For the first releases xDSL communication could be handled using an external xDSL modem connected to the Ethernet port of the OneBASE Pico Node B unit. For later revisions of hardware (i.e. new transmission boards) it would also be possible to terminate xDSL inside the OneBASE Pico Node B unit, thus eliminating the need for external units.
8.2 OneBASE Transmission Gateway Unit (TGU)
8.2.1 General
The BLAN proposed in this document will require a new dedicated unit responsible for translation between the ATM based protocols used by existing RNCs and the IP based protocols to be used for the BLAN. For this we propose to design a Transmission Gateway Unit (TGU), as previously outlined. This new unit will act as the interface between the RAN transmission network (ATM based) and the internal communication standards used to build the local transmission network (the IP based BLAN).
The OneBASE TGU will act as an IP-ATM inter-working unit. It will enable a Node B using the IP transport option according to 3GPP release 5 to communicate with an RNC only supporting the ATM transport option.
The TGU will also act as a converter between physical interfaces, providing conversion between ATM traffic over E1/J1/T1/STM1 to/from the RNC to IP traffic over Ethernet to/from Node B. In the preferred solution a proprietary protocol will be used for exchanging routing information etc between TGU and Node B/Node Bs.
An alternative solution may use standardized IP-ALCAP (ITU Q2631.1) to dynamically establish, modify and release individual IP connections towards the Node B. In this case the interwork between ALCAP and IP-ALCAP will be done according to ITU specification Q2632.1. Since the control of transport bearers between the RNC and several Node Bs could be performed by the TGU, it might also be possible to concentrate traffic in an "intelligent" way, thus saving transmission cost for the interface between the RNC and a remotely located TGU. This is particularly important if the transmission cost between RNC and TGU is significant (e.g. a remote star configuration).
8.2.2 Requirements
8.2.2.1 Operation and Maintenance
It should be possible to remotely manage and upgrade the TGU.
The TGU should have a persistent memory for storage of application programs and configuration data.
For rollback purposes there should be space for at least two versions of application programs and configuration data. There should be Fault Management for supervision of connections and hardware.
There should be Configuration management functions for defining ATM addresses etc.
There should be Performance management for collecting traffic statistics, such as load on links on both the BLAN and ATM side.
8.2.2.2 Admission control
The TGU should continuously monitor the actual load on the ATM backhaul both:
- for individual transport bearers (AAL2 transport bearers),
- individual PVC (to compare with configured max peak cell rate PCR etc),
- individual VP,
- aggregated total load on ATM backhaul.
This should be done both for uplink (from TGU towards RNC) and downlink (from RNC towards TGU). If different AAL2 transport bearers or PVC or VP have been assigned different priorities by RNC and/or Node B, then the TGU should continuously monitor load also per priority level and/or prioritized item. Before accepting a setup request (from a Node B or RNC) of a new transport bearer the TGU must check that the requested bandwidth up and downlink is available on the ATM backhaul connection to the TGU, i.e. that "current load" + "new request" < "max allowed load", where "current load" could be calculated as e.g.
- current measured total load (consumed bandwidth) on the links,
- current reserved load, i.e. sum of "requested bandwidth" for all activated transport bearers,
- "worst" of current measured and current reserved. The "current load" and/or "max allowed load" can be defined as either:
- per VP
- per PVC
- per Priority level.
This kind of "admission control" for new transport bearers also applies when reconfiguring a transport bearer, e.g. when the RNC requests that the reserved bandwidth be increased for a particular transport bearer. In some cases admission control can be completely or partly disabled, e.g. when the TGU is located close to the RNC with virtually unlimited bandwidth at no cost.
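As a purely illustrative sketch (the function and parameter names, and the use of kbit/s as the unit, are assumptions and not part of this specification), the admission-control check described above could be expressed as follows, here taking "current load" as the worst of the measured and the reserved load:

# Illustrative sketch: "current load" + "new request" <= "max allowed load",
# applied per direction; the same check could be repeated per VP, per PVC or
# per priority level as described above.

def admit_bearer(requested_kbps_ul: float, requested_kbps_dl: float,
                 measured_kbps: dict, reserved_kbps: dict,
                 max_allowed_kbps: dict) -> bool:
    """Return True if a new transport bearer may be accepted.
    Dictionaries are keyed by direction ("ul"/"dl")."""
    for direction, request in (("ul", requested_kbps_ul),
                               ("dl", requested_kbps_dl)):
        current = max(measured_kbps[direction], reserved_kbps[direction])
        if current + request > max_allowed_kbps[direction]:
            return False      # would overload the ATM backhaul in this direction
    return True

# Example: a 64 kbit/s bearer checked against an almost full backhaul
ok = admit_bearer(64, 64,
                  measured_kbps={"ul": 1500, "dl": 1700},
                  reserved_kbps={"ul": 1600, "dl": 1900},
                  max_allowed_kbps={"ul": 1920, "dl": 1920})
# ok == False, since 1900 + 64 > 1920 in the downlink direction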
8.2.2.3 Time server
As an option the TGU can be equipped with a time server using a very stable, high quality reference oscillator inside the TGU. The Node Bs in the BLAN can then use this time server in a similar way as an NTP server on the Internet, but since this server is located on the BLAN, the jitter in the IP network (i.e. the BLAN) will be significantly smaller than for an NTP server on the public Internet.
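As an illustration only (the message format and function names are assumed here, not specified by this document), the Node B could estimate its clock offset and the network delay towards a time server in the TGU using the classical NTP-style four-timestamp exchange; on the low-jitter BLAN these estimates will be far more stable than against a server on the public Internet.

# Illustrative sketch of an NTP-style offset/delay estimate on the BLAN.
# t1/t4 are read from the Node B clock, t2/t3 from the TGU clock.

def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: request sent (Node B), t2: request received (TGU),
    t3: reply sent (TGU), t4: reply received (Node B). Times in seconds."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # TGU clock minus Node B clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip time on the BLAN
    return offset, delay

# Example: a ~1 ms round trip with the Node B clock about 2 ms behind the TGU
offset, delay = ntp_offset_and_delay(t1=10.0000, t2=10.0025,
                                     t3=10.0026, t4=10.0011)
# offset ~= +0.002 s, delay ~= 0.001 s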
9 Iub aggregation
Using IP transport for the Iub connection of base stations (Node Bs) reduces the cost for the transmission backhaul significantly. The trend is also that base stations become smaller and smaller, making it simpler to find suitable sites and to deploy them.
In particular for very small base stations, like a single cell carrier Pico Node B, it may be possible to deploy large volumes, especially when it is possible to connect them via a public IP network.
However, existing RNCs are not designed to cope with such a large number of separate single cell base stations; instead they are designed to handle fewer but larger base stations, where each base station has a few control ports terminating NBAP but handles a number of sectors, each with a number of RF carriers, e.g. a 6 sector x 3 carrier configuration.
To reduce the load on the RNC, the TGU can therefore be modified to also perform an aggregation of Iub, making a number of single cell carrier Node Bs appear to the RNC as one larger, several sector Node B.
In this case the TGU will need at least to terminate the common NBAP (C-NBAP, also called the "Node B control port") and ALCAP (if used).
The dedicated NBAP (D-NBAP, also called the Communication control port) may either be terminated on the TGU, or may be forwarded to the Node B unit handling a particular radio link. The problem with the latter solution would be that if a radio link (DPCH) moves from one Node B unit to another (a handover), then control of the radio link should also be moved. However, 3GPP has foreseen this kind of problem, and procedures for this move of the communication control port are already defined in the standard.
If/when a Ue enters a new cell, the TGU can use these already defined procedures to change the communication control port (i.e. switching the control flow to another Node B for a particular Ue).
At a radio link setup request from the RNC (on C-NBAP) the TGU needs to decide which Node B handles that particular cell and then forward a radio link setup message to that cell. This forwarded message can either be an identical copy of the original message, or it can be some kind of proprietary message. In a first basic implementation the TGU would only terminate the C-NBAP and forward all other control and user plane signaling directly to the Node B units handling the connection at that particular moment. In such a case the routing inside the TGU will need to route different CIDs to/from the same VPI-VCI to different Node Bs, i.e. different IP addresses.
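The following sketch shows one hypothetical way (the class and method names are illustrative assumptions) in which such per-CID routing could be organized inside an Iub-aggregating TGU: a cell-to-Node B table is consulted at radio link setup, and a per-CID route is then installed, so that CIDs carried on the same VPI-VCI towards the RNC can be forwarded to different Node B IP addresses on the BLAN.

# Illustrative sketch of per-CID routing in an Iub-aggregating TGU.

class IubAggregator:
    def __init__(self, cell_to_node_b: dict):
        self.cell_to_node_b = cell_to_node_b   # cell id -> Node B IP on the BLAN
        self.cid_routes = {}                   # (vpi, vci, cid) -> Node B IP

    def on_radio_link_setup(self, cell_id: int, vpi: int, vci: int, cid: int) -> str:
        node_b_ip = self.cell_to_node_b[cell_id]       # which unit handles the cell
        self.cid_routes[(vpi, vci, cid)] = node_b_ip   # install the per-CID route
        return node_b_ip                               # forward the setup message here

    def route_downlink(self, vpi: int, vci: int, cid: int) -> str:
        # CIDs sharing one VPI-VCI may end up on different Node B units
        return self.cid_routes[(vpi, vci, cid)]

agg = IubAggregator({101: "10.0.0.11", 102: "10.0.0.12"})
agg.on_radio_link_setup(cell_id=101, vpi=1, vci=32, cid=8)
agg.on_radio_link_setup(cell_id=102, vpi=1, vci=32, cid=9)
assert agg.route_downlink(1, 32, 8) != agg.route_downlink(1, 32, 9)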
In a further development of the TGU, the TGU may also perform so-called "softer handover", i.e.:
- the TGU needs to split the downlink data flow for some Ue connections to get more than one Node B unit to transmit it over the air interface,
- the TGU needs to receive uplink data for the same Ue connections from several Node Bs and then combine these flows (e.g. by selection combining on FP packet level) to create one uplink flow to the RNC for each Ue, as sketched below.
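As a simplified illustration of the uplink part (the frame representation and the quality metric are assumptions, not the actual FP implementation), selection combining on FP packet level could look as follows: for each TTI the TGU keeps the best of the frames received from the different Node B legs and forwards only that frame to the RNC.

# Illustrative sketch of uplink selection combining on FP-packet level.

from dataclasses import dataclass

@dataclass
class UplinkFpFrame:
    cfn: int              # connection frame number identifying the TTI
    crc_ok: bool          # CRC result reported for the transport block(s)
    quality: float        # assumed per-leg quality estimate
    payload: bytes

def select_combine(legs: list) -> UplinkFpFrame:
    """Pick one frame per TTI from the frames received on the different legs:
    prefer frames with a good CRC, then the highest quality estimate."""
    return max(legs, key=lambda f: (f.crc_ok, f.quality))

legs = [UplinkFpFrame(cfn=42, crc_ok=False, quality=0.3, payload=b"\x01"),
        UplinkFpFrame(cfn=42, crc_ok=True,  quality=0.7, payload=b"\x02"),
        UplinkFpFrame(cfn=42, crc_ok=True,  quality=0.9, payload=b"\x03")]
best = select_combine(legs)   # the frame with crc_ok and quality 0.9 goes to the RNC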
In this later development, the TGU may also terminate both C- and D-NBAP, or it may choose to itself completely administrate the "communication context" (3GPP terminology) for e.g. radio links (i.e. Ues) in "softer handover", while distributing the responsibility for handling communication contexts residing only on a single cell to the particular Node B unit handling the cell in which the Ue is residing. Also in this case it may (as described above) be necessary to change the "communication control port" if a Ue moves in and out of "softer handover".
If the TGU hardware has enough processing capacity, then the TGU can in this way also emulate more than one "several sector Node B", i.e. terminating C-NBAP and ATM etc. for more than one "cluster of Node B units". In this case the RNC would, via the TGU, see one "several sector Node B" per cluster. The above described concept of a "TGU" acting as an Iub aggregator presenting one or more "several sector Node Bs" instead of a cluster of Node B units can also prove to be very useful even when the RNC itself can terminate Iub over IP. In such an implementation the TGU would have an IP interface both to the RNC and to the Node Bs.
A number of features related to different embodiments have been described in detail above, features which may be combined in many ways. Though not explicitly mentioned above, it should be understood that typically both the TGU and the NBU include computer systems comprising microprocessors with associated memory space and operating software, as well as application software configured such that execution of the computer code of the application software causes the computer systems to carry out the steps mentioned herein.

Claims

1. A communications system comprising: a Radio Network Controller (RNC) connected in an Asynchronous Transfer Mode (ATM) network; a Node B unit (NBU) radio base station connected in an IP network; a Transmission Gateway Unit (TGU) connected to the IP network and to the ATM network, configured to change transport bearer of data packets transmitted between the RNC and the NBU.
2. The communications system of claim 1, comprising: a plurality of NBUs connected in the IP network to form a Node B Local Area Network (BLAN), which BLAN is connected to the ATM network via the TGU.
3. The communications system of claim 2, wherein the TGU is configured to distribute traffic between different NBUs in the BLAN.
4. The communications system of claim 1, wherein the TGU is configured to transport circuit switched traffic to and from the NBU using ATM links, and to transport packet switched user data to and from the NBU using IP networks.
5. The communications system of claim 1, wherein the NBU has a first physical communication port for IP connection, and a second physical communication port for ATM connection.
6. The communications system of claim 2, wherein each NBU has a first physical communication port for IP connection to the BLAN, and a second physical communication port for ATM connection to the RNC over an ATM backhaul.
7. The communications system of claim 1, wherein the NBU comprises an internal reference oscillator for frequency reference.
8. The communications system of claim 7, wherein the NBU is communicatively connected to an external clock reference, and configured to synchronize the internal reference oscillator to the external clock reference.
9. The communications system of claim 8, wherein the external clock reference is a time server located in the TGU.
10. The communications system of claim 9, wherein the TGU is connected to the NBU by dedicated lines.
11. The communications system of claim 9, wherein the TGU is configured to prioritize timing messages transmitted from the internal time server to the NBU over the IP network.
12. The communications system of claim 8, wherein the NBU is configured to send a new synchronization request to the external clock reference with a period T, where T is equal to or less than 10 ms.
13. The communications system of claim 12, where T is more than 100 ms.
14. The communications system of claim 8, wherein the external clock reference comprises a GPS receiver.
15. The communications system of claim 1, wherein the TGU is configured to terminate ALCAP connection from the RNC.
16. The communications system of claim 15, wherein the TGU is configured to use content of received and terminated ALCAP messages to configure ATM transport bearers and select IP connections to the NBU.
17. The communications system of claim 15, wherein the TGU is configured to use content of received and terminated ALCAP messages to perform admission control for the ATM network and/or the IP network.
18. The communications system of claim 1, wherein the NBU is configured to terminate ALCAP connection from the RNC, and to send a message back to the TGU to set up a transport bearer.
19. The communications system of claim 18, wherein the NBU is configured to request the TGU to create a mapping between a CID and a UDP port.
20. The communications system of claim 1, wherein an AAL2 transport bearer of the ATM network is assigned to a VCC with a given ATM service class corresponding to a priority level, and wherein the TGU is configured to read the VCC of a downlink data stream, and assign a corresponding priority to the IP network side.
21. The communications system of claim 1, wherein an AAL2 transport bearer of the ATM network is assigned to a VCC with a given ATM service class corresponding to a priority level, and wherein the NBU is configured to acquire information of the VCC to be used on the ATM side and to assign a corresponding priority to the IP network side for an uplink data stream.
22. The communications system of claim 1, wherein the TGU and/or the NBU is configured to assign higher priority to circuit switched data than to packet switched data.
23. The communications system of claim 1, wherein the NBU includes a first clock and the TGU includes a second clock, wherein the TGU is configured to time stamp a message with a reading of the second clock prior to sending the message to the NBU, and wherein the NBU is configured to time stamp the same message upon receipt with a reading of the first clock, wherein the NBU is configured to measure the delay in the IP network based on the time stamps.
24. The communications system of claim 23, wherein the TGU is configured to send the message with a preset priority level, and the NBU is configured to measure the delay for that priority level.
25. The communications system of claim 1, wherein the NBU includes a first clock and the TGU includes a second clock, wherein the NBU is configured to time stamp a message with a reading of the first clock prior to sending the message to the TGU, and wherein the TGU is configured to time stamp the same message upon receipt with a reading of the second clock, wherein the TGU is configured to measure the delay in the IP network based on the time stamps.
26. The communications system of claim 25, wherein the NBU is configured to send the message with a preset priority level, and the TGU is configured to measure the delay for that priority level.
27. The communications system of claim 23 or 25, wherein the time stamp is introduced on top of UDP-IP.
28. The communications system of claim 1, including a mechanism for detecting lost IP packets in a downlink direction in the IP network, wherein the TGU comprises a counter configured to continuously count the number of blocks sent, and wherein the NBU comprises a counter configured to continuously count the number of blocks received, and wherein the TGU is configured to periodically send a status message to the NBU including a measurement of the number of blocks sent.
29. The communications system of claim 1, including a mechanism for detecting lost IP packets in an uplink direction in the IP network, wherein the NBU comprises a counter configured to continuously count the number of blocks sent, and wherein the TGU comprises a counter configured to continuously count the number of blocks received, and wherein the NBU is configured to periodically send a status message to the TGU including a measurement of the number of blocks sent.
30. The communications system of claim 27 or 28, wherein the NBU and the TGU hold separate counters for different priority levels assigned to transmitted IP packets.
31. The communications system of claim 28 or 29, wherein the measurement of the number of blocks sent is introduced on top of UDP-IP.
32. The communications system of claim 1, wherein the RNC is configured to put NBAP control plane signaling on a dedicated AAL 5 permanent virtual connection (PVC) using SSCOP.
33. The communications system of claim 32, wherein the TGU is configured to translate between SSCOP and SCTP protocols for control plane signaling, and the NBU is configured to terminate NBAP.
34. The communications system of claim 32, wherein the TGU is configured to use SSCOP over UDP/IP for NBAP, and to map AAL 5 frames to UDP frames, and wherein the NBU is configured to terminate NBAP.
35. The communications system of claim 1, wherein the RNC is configured to use ALCAP to control transport bearer allocation, and to put ALCAP on a dedicated AAL5 PVC using SSCOP, and wherein the NBU is configured to terminate ALCAP.
36. The communications system of claim 35, wherein the TGU is configured to map AAL 5 frames to UDP frames.
37. The communications system of claim 1, wherein the TGU is configured to terminate ALCAP from the RNC, and to use IP-ALCAP to the NBU.
38. The communications system of claim 1, wherein the TGU is configured to act as a server and the NBU is configured to act as a client in a TGU inter-working protocol on UDP or TCP.
39. The communications system of claim 1, wherein the NBU is configured to wherein the IWU is configured with a static mapping between VPI-VCI-CID and UDP/IP port.
40. The communications system of claim 1, wherein the NBU is configured to increase a time of arrival window.
41. The communications system of claim 1, wherein the TGU and the NBU are configured to perform FP packing and send a plurality of FP frames in the same IP datagram.
42. The communications system of claim 1, wherein the TGU is configured to terminate NBAP and create a virtual several sector Node B as seen from the RNC.
43. The communications system of claim 1, wherein the TGU and/or the NBU are configured to perform admission control to deny new calls if either ATM or IP is overloaded/congested.
44. The communications system of claim 1, wherein the TGU and/or the NBU is configured to act as a Diffserv code point and perform admission control to prevent breaking the contract for each Diffserv priority level.
45. The communications system of claim 1, wherein the TGU and/or the NBU is configured to resolve congestion/overload on the transport network by selectively discarding data and/or releasing calls.
46. The communications system of claim 1, wherein the TGU and/or the NBU is configured to select which data to route on ATM and which to route on IP on the Iub between the NBU and the TGU.
47. A Transmission Gateway Unit (TGU), configured in accordance with any of the preceding claims.
48. A Node B Unit (NBU) radio base station, configured in accordance with any of the preceding claims.
PCT/SE2006/000565 2005-05-13 2006-05-15 Transmission gateway unit for pico node b Ceased WO2006121399A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US59486205P 2005-05-13 2005-05-13
US60/594,862 2005-05-13
US76684906P 2006-02-15 2006-02-15
US60/766,849 2006-02-15

Publications (2)

Publication Number Publication Date
WO2006121399A2 true WO2006121399A2 (en) 2006-11-16
WO2006121399A3 WO2006121399A3 (en) 2007-01-04

Family

ID=37396996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2006/000565 Ceased WO2006121399A2 (en) 2005-05-13 2006-05-15 Transmission gateway unit for pico node b

Country Status (1)

Country Link
WO (1) WO2006121399A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834805A (en) * 2010-05-31 2010-09-15 西南交通大学 A method for flow control transmission protocol message traversing network address translation equipment
CN101953225A (en) * 2008-02-22 2011-01-19 高通股份有限公司 The method and apparatus that the transmission of base station is controlled
EP2553971A4 (en) * 2010-03-30 2013-08-07 Ericsson Telefon Ab L M METHOD FOR DETECTING CONGESTION IN A RADIOCELLULAR SYSTEM

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100414669B1 (en) * 2001-10-29 2004-01-13 삼성전자주식회사 Data translation apparatus of atm in mobile communication
AU2003259847A1 (en) * 2002-08-14 2004-03-03 Qualcomm Incorporated Core network interoperability in a pico cell system
US20050043030A1 (en) * 2003-08-22 2005-02-24 Mojtaba Shariat Wireless communications system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101953225A (en) * 2008-02-22 2011-01-19 高通股份有限公司 The method and apparatus that the transmission of base station is controlled
KR101124822B1 (en) 2008-02-22 2012-03-26 콸콤 인코포레이티드 Methods and apparatus for controlling transmission of a base station
RU2496279C2 (en) * 2008-02-22 2013-10-20 Квэлкомм Инкорпорейтед Method and apparatus for controlling base station transmission
CN101953225B (en) * 2008-02-22 2015-03-11 高通股份有限公司 Method and device for controlling transmission of a base station
US11477721B2 (en) 2008-02-22 2022-10-18 Qualcomm Incorporated Methods and apparatus for controlling transmission of a base station
EP2553971A4 (en) * 2010-03-30 2013-08-07 Ericsson Telefon Ab L M METHOD FOR DETECTING CONGESTION IN A RADIOCELLULAR SYSTEM
US8908524B2 (en) 2010-03-30 2014-12-09 Telefonaktiebolaget L M Ericsson (Publ) Method of congestion detection in a cellular radio system
CN101834805A (en) * 2010-05-31 2010-09-15 西南交通大学 A method for flow control transmission protocol message traversing network address translation equipment

Also Published As

Publication number Publication date
WO2006121399A3 (en) 2007-01-04

Similar Documents

Publication Publication Date Title
US11159423B2 (en) Techniques for efficient multipath transmission
Bhattacharjee et al. Time-sensitive networking for 5G fronthaul networks
NO326391B1 (en) Procedure for transmitting data in GPRS
WO2008089660A1 (en) Method, device and radio network system of unifying radio accessing
EP1256213B1 (en) Method and system for communicating data between a mobile and packet switched communications architecture
EP1234459B1 (en) System and method in a gprs network for interfacing a base station system with a serving gprs support node
US20060198336A1 (en) Deployment of different physical layer protocols in a radio access network
US8619811B2 (en) Apparatus, system and method for forwarding user plane data
WO2006121399A2 (en) Transmission gateway unit for pico node b
Salmelin et al. Mobile backhaul
EP1980056B1 (en) Jitter management for packet data network backhaul of call data
CN1969514B (en) Information transmission in a communications system
KR101123068B1 (en) Access to cdma/umts services over a wlan access point, using a gateway node between the wlan access point and the service providing network
US11251988B2 (en) Aggregating bandwidth across a wireless link and a wireline link
Lilius et al. Planning and optimizing mobile backhaul for LTE
Zhang et al. Routing and packet scheduling in LORAWANs-EPC integration network
CN101932003A (en) Method and equipment for processing congestion control
EP2144477A2 (en) Method and system for pooled 2g-3g single transmission
Fendick et al. The PacketStar™ 6400 IP switch—An IP switch for the converged network
US20050232235A1 (en) Method and system for providing an interface between switching equipment and 2G wireless interworking function
Li et al. Carrier Ethernet for transport in UMTS radio access network: Ethernet backhaul evolution
Parikh et al. TDM services over IP networks
Metsälä 3GPP Mobile Systems
Deiß et al. QoS
Wylie-Green et al. LTE Measurements

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06733409

Country of ref document: EP

Kind code of ref document: A2