TV News Studio for IBC 2019

Media IP Networks’ Rupert Kelly was appointed Technical Manager for the design and delivery of the TV News studio facilities in the core of the vast International Broadcasting Convention (IBC), held annually at Amsterdam’s RAI exhibition complex. The TV News studio is used continuously during show opening hours for prerecorded interviews, live panel discussions, technology briefings and glitzy product launches.

Immediately on contract, Media IP Networks secured Jeff Laing as Chief Engineer to oversee the system build and train technical operators. By tradition, many junior operators at IBC are students from the UK’s leading broadcast training institutions, and Jeff had spent a distinguished career break as a lecturer for the BBC Academy.

In close partnership with Kev Connor and Roy Callow of IBC Resources and Anna Valley’s Jenny Bigrave and Steve Jones, Media IP Networks was directed by the IBC’s Matthew Tompkinson to develop the project and improve on 2018’s success with yet more cutting-edge 2110 technology from EVS and Grass Valley. This required a major rewrite and rationalisation of the documentation to define a plan that could be agreed by all sponsors. The schematic below captured all technical and operational possibilities and offered resilience, efficiency and performance.

Schematic showing the wrap-around SMPTE2110 S-COREMaster orchestrator (yellow) hosted on Arista 7050 switches with Embrionix SFPs interfacing to HDSDI devices (blue) and the backup Sirius matrix (green)

Previous years had seen the migration from a straightforward 5x camera HDSDI installation, with discrete analogue and AES digital audio processing for a single studio floor, up to a hybrid SMPTE2110/HDSDI design, with several major manufacturers offering exhilarating new SMPTE2110 products and techniques, dedicated galleries and commercial wet hire. Several PTZ remote-control cameras and a second green-screen set were added in 2018, and these were further upgraded with augmented reality (by Pixotope) and simultaneous recording in 2019.

As a main sponsor, EVS provided not only their legendary XT media servers, mostly configured for SMPTE2110, but also the ubiquitous Arista switches, their ‘S-COREMaster’ SDN control layer and upwards of 100 Embrionix HDSDI<->SMPTE2110 SFP+ converters. Fortunately, EVS also brought several of their very best engineers to IBC, so Jeff and I were well supported in all of the design and build stages in Liege (at EVS’ fabulous new facility – wow!) and at the show during the rig and transmission days. Special thanks to Johan Engelen for his patient determination, good humour and brilliance.

IBC TV News’s very public studio floor.

Grass Valley Group provided the regular LXD86XF ‘World’ camera channels with SMPTE2110 adaptors in their XCUs, and K-Frame vision mixer crates populated with 2110 cards. GVG were also very supportive with access to their top engineer at the IBC, Geert de Neve, to iron out a few timing and sync errors once the 2110 system had been commissioned.

Additional HDSDI equipment such as a 128×128 Snell Sirius matrix, Evertz synchronizers and multiviewers, HD glue etc. was provided by Gearhouse. Pixotope ingeniously turned the green screen, lit precisely by Biff, into a truly amazing Augmented Reality demonstration, popular as a stylised commercial product presentation facility.

Grass Valley provided both the vision mixer and 5 full camera chains with full SMPTE2110 delivery paths, which integrated seamlessly with the Embrionix HDSDI adaptors in the Arista switch frames under the S-COREMaster control plane.

An Andiamo MADI converter was used to interface with the sound desk to feed AES audio into the ingest XTs. A Clearcom intercom system was deployed across all galleries and the floor, and remotely to other IBC areas over the SM fibre-connected Baumann multiplexer.

TV News crew 2019. Special shout out to Neil McLeod and Gillian Kelly

Korea Olympics 2018 for NEP and Discovery

Media IP engineers fully designed, commissioned and maintained the resilient, scalable, high-performance IP internetwork connecting all of Discovery’s HD video, audio, communications and file-based resources within the International Broadcast Centre, across the 13 Games venues and out to Eurosport client sites for the Winter Olympics in PyeongChang, Korea, in January and February 2018.

The IP network was fully operational and integrated with

  • Live HDSDI (200+ paths) and file-based broadcast content delivery, intercoms and equipment control across 4x Eurosport IBC studios and 13 venues.
  • Intercontinental Transmission links to Discovery’s Eurosport clients and libraries.
  • MediaGrid and venue-based ingest services provided by EVS.
  • IBC distribution and venue information services provided by OBS.

Venue-to-IBC IP infrastructure within Korea was provided to all venues by Olympic Broadcasting Services (OBS). Diverse managed trunks carried all broadcast, corporate ‘user’, streaming, and control and coordination traffic.

Intercontinental transfer and transmission path services, as well as WAN IP infrastructure for more than 70 HD feeds to European-based content presentation, transmission and archive facilities and 48 broadcast ‘markets’, were delivered by Globecast and tightly integrated with NEP’s LANs in the PyeongChang IBC.

Performance requirements:

  • In excess of 99.999% uptime (‘5x9s’)
  • Clearly prioritize IBC <-> Venue traffic types in the following QoS hierarchy:
    • IP system management (routing protocols etc)
    • Realtime high value streaming media traffic (2022-6/7, AoIP)
    • SSH/Telnet configuration and monitoring
    • Non-broadcast streaming (IPTV)
    • Equipment control (SDI Matrix, RCPs, SIP, GUI configurators)
    • Content file transfer (EVS Funnell)
    • Network statistics gathering (SNMP and Netflow)
    • Web browsing
    • Discovery ‘corporate staff user’ traffic
  • Be simple to monitor and troubleshoot.
  • Utilise all physical layer resources to provide BOTH resilience and load balancing.
  • Use protocols and methods that are proven and understood.
  • Integrate cleanly and efficiently with both the local LANs at each venue and the intercontinental WAN over to Discovery’s principal European transmission networks.
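
To make the QoS hierarchy above concrete, the sketch below maps each listed traffic class to an illustrative DSCP marking in priority order. The class names and DSCP values are assumptions for the example only, not the markings actually configured on the Olympic network.

    # Illustrative DSCP markings for the QoS hierarchy listed above.
    # The values are assumptions for this sketch, not the markings actually
    # deployed at PyeongChang; only the relative ordering matters here.
    PRIORITY_MAP = {
        "ip_system_management": 48,   # CS6  - routing protocols etc.
        "realtime_media":       46,   # EF   - 2022-6/7, AoIP
        "ssh_telnet":           40,   # CS5  - configuration and monitoring
        "iptv":                 34,   # AF41 - non-broadcast streaming
        "equipment_control":    26,   # AF31 - SDI matrix, RCPs, SIP, GUIs
        "file_transfer":        18,   # AF21 - EVS content transfer
        "snmp_netflow":         16,   # CS2  - statistics gathering
        "web_browsing":         10,   # AF11
        "corporate_user":        0,   # best effort
    }

    def classify(traffic_type: str) -> int:
        """Return the DSCP marking for a named traffic class (best effort if unknown)."""
        return PRIORITY_MAP.get(traffic_type, 0)

    # Example: realtime media is marked ahead of everything except network control.
    assert classify("realtime_media") > classify("iptv") > classify("corporate_user")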

The IP architecture was based around a proven and well-understood collapsed-core topology, as shown below. Two powerful Arista 1750 L3 switches created the resilient switched core with full L3 routing across all 90+ IBC LAN subnets, although only 6 were directly connected to each, and supernetting was used to avoid unwieldy routing tables. A further distribution layer of four 7150s downlinked to the 13 connected venues, two on each of the A and B venue-link sides provided by Olympic Broadcasting Services. These four critical switches were interconnected in a grid topology using both link aggregation (as MLAG) and OSPFv3 equal-cost path load balancing. No instability, whether as packet loss or path jitter, was detected during installation or operational use.

Final layer 3 IP internetwork design as built and operational for 5 weeks with 100% uptime and informational-level syslog messages only.

Supernetting was widely implemented at the core and distribution levels. A few static routes were entered to avoid more complex link-state calculations on the core switches and to prevent ‘rogue’ configs pulling down the core’s routing table.
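
As a minimal illustration of how summarisation kept the core tables small, the sketch below uses Python’s ipaddress module to collapse a block of contiguous venue subnets into a single summary route. The address ranges are invented for the example and are not the actual Olympic addressing plan.

    import ipaddress

    # Hypothetical venue allocation (not the real addressing plan):
    # sixteen contiguous /24 subnets carved from one venue's block.
    venue_subnets = [ipaddress.ip_network(f"10.20.{i}.0/24") for i in range(16)]

    # collapse_addresses() merges contiguous prefixes into the fewest summaries,
    # which is what the core carries instead of sixteen individual entries.
    summaries = list(ipaddress.collapse_addresses(venue_subnets))
    print(summaries)   # [IPv4Network('10.20.0.0/20')]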

OSPFv3 is an advanced link-state routing protocol; configured with short LSA timers and dynamic, near-equal path costs, it provided proportional load balancing as well as full, automatic failover resilience. This proven technique quietly doubled WAN capacity by putting the backup paths into full use, effectively delivering 2x1Gbps (or 2x10Gbps) at L3, although resilience then fell back on effective QoS for critical traffic if a link was compromised and traffic exceeded what a single link could carry.
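
Equal-cost load balancing of this kind works per flow rather than per packet, so each stream stays in order while the aggregate load spreads across both links. The sketch below illustrates the idea with a simple flow hash; the hash fields and link names are assumptions for the example, not Arista’s actual hashing implementation.

    import hashlib

    UPLINKS = ["venue-link-A", "venue-link-B"]   # two equal-cost OSPF paths

    def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
        """Pin a flow (5-tuple) to one uplink so its packets are never reordered."""
        flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        digest = hashlib.sha256(flow).digest()
        return UPLINKS[int.from_bytes(digest[:4], "big") % len(UPLINKS)]

    # Different flows land on different links, balancing the load; if one link
    # fails, OSPF withdraws it and every flow re-hashes onto the survivor,
    # where QoS protects the critical traffic if that single link saturates.
    print(pick_uplink("10.20.1.10", "10.30.5.20", "udp", 5004, 5004))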

All of the remote terminations (venues) at the end of each spoke followed exactly the same design, with custom changes only for IP addressing and any additional services/subnets required. All remote subnets were gatewayed through resilient, load-balancing, QoS-enforcing Arista 7150 L3 switches with very similar, proven configurations supporting OSPF, MLAG and VARP for all remote services, whether implemented or not. So as well as providing direct access uplinks for the 34+ venue LAWO Rem4s codecs, these switches were the principal distribution to secondary access switches (Cisco, Netgear) for non-2022 VLANs such as VSM, Ravenna, Ingest and Riedel.

Layer 2 schematic for a sample venue (1 of 13), showing interlocking horseshoe VLANs to avoid STP instability and enable equal-cost OSPF load balancing. The direct HDSDI codec VLANs are shown in red, with the LAWO codecs at each end on the same subnet to permit SAP and avoid L3 routing complications.

A strict condition across the whole estate was that each and every subnet existed on a single VLAN with a single gateway. Per-VLAN Spanning Tree (PVST) was deployed in every VLAN with no STP performance degradation, since each VLAN was limited to a maximum of four switches, with the exception of corporate end-to-end traffic. PortFast was universally applied, and no STP instability was seen in syslogs from any switch, even with MLAG interlinks. There were no VLAN loops, and Spanning Tree was NOT used to provide any failover mechanism. VLAN trunking was deployed throughout, with central, simple supernetted L3 IP routing in the collapsed core providing a stable internetwork working end-to-end at line speed for all services including EVS, graphics and operational TV surfaces.
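
The one-subnet-per-VLAN, one-gateway rule is straightforward to audit from the address plan. The sketch below checks a hypothetical excerpt of such a plan; the VLAN IDs, prefixes and gateways are invented for the example.

    import ipaddress
    from collections import defaultdict

    # Hypothetical VLAN plan excerpt (invented for the example).
    vlan_plan = [
        (110, "10.110.0.0/24", "10.110.0.1"),   # control
        (120, "10.120.0.0/24", "10.120.0.1"),   # audio-over-IP
        (130, "10.130.0.0/24", "10.130.0.1"),   # ingest
    ]

    def audit(plan):
        """Flag any VLAN carrying more than one subnet, or any subnet with more than one gateway."""
        subnets = defaultdict(set)
        gateways = defaultdict(set)
        for vlan, subnet, gateway in plan:
            net = ipaddress.ip_network(subnet)
            gw = ipaddress.ip_address(gateway)
            assert gw in net, f"gateway {gw} is outside {net} on VLAN {vlan}"
            subnets[vlan].add(net)
            gateways[net].add(gw)
        problems = [f"VLAN {v} has {len(s)} subnets" for v, s in subnets.items() if len(s) > 1]
        problems += [f"{n} has {len(g)} gateways" for n, g in gateways.items() if len(g) > 1]
        return problems or ["plan conforms"]

    print(audit(vlan_plan))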

Snapshot from SolarWinds NMS in the MCR/NOC illustrating the resilient 30+ device MLAG switching fabric across the IBC estate, distributing all content and control within secure, monitored VLANs.

Venue-to-venue traffic was fully implemented, including 12+ broadcast HD camera feeds between the ice hockey venues carried as IP JPEG2000 streams over the protected XYZ/OBS infrastructure as required, notwithstanding security concerns. Similarly, all broadcast IP sources at every venue were fully available at all IBC and Europe-wide transmission centres.

Snapshot of SolarWinds NMS monitoring in the MCR and NOC, showing EVS connectivity end-to-end across the estate and the VMware RDP hosts within the IBC ESX server.

Media IP Networks commissioned a VMware ESXi 6.7 server and populated it with 10x Windows 10 VMs. A working, secure and reliable Windows 10 VM running the VSM application was trialled and replicated 11 times to create the battery. Remote broadcasters in Europe and at the venues used regular MS Terminal Services clients (MS native, MobaXterm, VNC) to remotely control a strictly allocated and firewalled VM instance, selecting their IBC sources onto their WAN links for Tx. Confidence was high that these VMs were reliable and effective; apparently they have been used to change sources on air from broadcasters’ galleries in Europe.
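
Because each broadcaster’s allocated VM had to stay reachable over Terminal Services from Europe, a simple probe of the RDP port is one way a battery like this can be watched. The host names below are invented for the example; 3389 is simply the standard RDP port, and this is a sketch rather than the monitoring actually used on site.

    import socket

    # Hypothetical VM battery addresses (invented for the example).
    VM_BATTERY = [f"vsm-vm{n:02d}.ibc.example" for n in range(1, 11)]
    RDP_PORT = 3389   # standard Microsoft RDP / Terminal Services port

    def rdp_reachable(host, port=RDP_PORT, timeout=2.0):
        """Return True if a TCP connection to the VM's RDP port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for vm in VM_BATTERY:
        print(vm, "OK" if rdp_reachable(vm) else "UNREACHABLE")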

Snapshot of SolarWinds NMS in Media IP Networks’ NOC, monitoring the LAWO codecs at the venues and in the IBC in real time for bandwidth, jitter and packet loss.