
Network and Regulatory Predictions for 2026

Category: News
Published: 28th January 2026

800G, Cloud Repatriation, SMPTE ST 2110, AI and New Supply Chain Regulations

We work closely with leading infrastructure organisations to help them meet their testing, monitoring, and efficiency goals. For many of our clients, the network isn’t just essential – it is their business.

This close collaboration gives us a front-row seat to emerging technologies and the strategies driving them. In this article, we share our five key predictions for 2026, highlighting trends and regulations set to shape networks, cloud, broadcast, AI, and security.

1. Carrier network speeds to accelerate

For the last few years, 100G has been the overwhelming choice for core and transport network backbones. Through our interactions with many Network Operator customers, we’ve helped some of them make the transition from 100G to 400G.

But as many enterprises upgrade their Internet and WAN links to 10G and even 100G, and with more of us upgrading our home connections to 1G to satisfy our 4K streaming, online gaming, and other bandwidth-hungry needs, there will come a point where 400G will struggle to keep up.

Then consider that 5G mobile networks – carrying traffic for VR gaming (which can require multiple Gbps per user), V2X, and IoT – are all underpinned by wireline networks, and 400G capacity may struggle sooner than expected.
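As a back-of-envelope illustration of why aggregate demand can outgrow a 400G link, the sketch below sums concurrent peak-hour demand across a few hypothetical subscriber tiers. Every number in it – subscriber counts, concurrency, per-user rates – is an invented assumption, not operator data:

```python
# Back-of-envelope sketch of aggregate demand on a 400G core link.
# All subscriber counts, concurrency factors, and per-user rates are
# illustrative assumptions, not measurements from any real operator.

LINK_CAPACITY_GBPS = 400

# tier name -> (subscribers, peak-hour concurrency, per-user Gbps)
tiers = {
    "1G residential": (100_000, 0.05, 0.3),        # 5% active at ~300 Mbps
    "10G enterprise": (500, 0.30, 4.0),
    "5G backhaul (VR/V2X/IoT)": (50, 0.50, 10.0),  # aggregation points
}

def peak_demand_gbps(tiers):
    """Sum concurrent peak-hour demand across all tiers, in Gbps."""
    return sum(n * conc * rate for n, conc, rate in tiers.values())

demand = peak_demand_gbps(tiers)
print(f"Estimated peak demand: {demand:,.0f} Gbps "
      f"({demand / LINK_CAPACITY_GBPS:.1f}x a 400G link)")
```

Even with conservative concurrency assumptions, a single aggregation point can see demand that dwarfs one 400G link, which is the pressure driving 800G adoption.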

On a practical level, the upgrades from 100G to 400G often required new line cards, optics, and DWDM hardware. However, recent advances in optical DWDM technology have enabled some 400G infrastructure to be more easily upgraded to 800G.

And we think the combination of this advancement and the ever-growing need for faster networks is going to accelerate the adoption of 800G.

But whilst the migration from 400G to 800G should be less complex than 100G to 400G, the new speeds and the infrastructure will still need to be tested to ensure they perform as expected.

So, if upgrading the network is on your agenda this year – as it already is for some operators – please do contact us to discuss your testing needs and ensure that everything works as it should.

2. Partial cloud repatriation

Of the many trends we observed this time last year, one in particular we think will continue: the drive to reduce, and better predict, costs wherever possible.

Since the early 2010s, a significant number of organisations have migrated most, or even all, of their workloads to the cloud. Whilst this has proven convenient, scalability challenges, oversimplified pricing narratives, and a lack of transparency on the cost of moving large datasets out of the cloud have led some companies to rethink their strategies and bring some workloads home.
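One of those cost-transparency problems – egress charges for moving large datasets out of the cloud – is easy to illustrate. The sketch below assumes a flat per-GB rate purely for illustration; real cloud pricing is tiered and varies by provider and region:

```python
# Sketch of how data-egress charges shape repatriation decisions.
# The flat per-GB rate is an assumption for illustration only; real
# cloud pricing is tiered and varies by provider and region.

EGRESS_RATE_PER_GB = 0.09  # assumed flat $/GB

def egress_cost(gb_moved_out: float, rate: float = EGRESS_RATE_PER_GB) -> float:
    """Cost of moving a dataset out of the cloud."""
    return gb_moved_out * rate

# A team pulling a 50 TB dataset back on-prem in one go:
print(f"One-off 50 TB egress: ${egress_cost(50_000):,.2f}")
```

The point is not the exact figure, but that the bill scales linearly with data volume, so the cost of leaving grows quietly alongside the data you accumulate in the cloud.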

Last year, Azure and AWS outages caused major disruption for companies worldwide, reminding us that cloud hosting doesn’t give us control over outage remediation. Whilst bringing workloads on-prem doesn’t guarantee full availability, it still allows you to take control of fixing outages, and this alone is a motivation behind some of the cloud to on-prem migrations we see.

And more recently, we’ve noticed some organisations prefer to build their own AI systems – not just for data control, compliance, and latency, but also to avoid cloud and AI lock-in and to keep costs under control as their needs scale.

Of course, many small enterprises will continue with cloud-first strategies, and large companies will (in the main) keep things like CRM, HR, and Backup systems in the cloud. But we think 2026 will see more homecomings for AI, large databases, and always-on clusters where cloud pricing and outages no longer offer the operational and financial predictability they need.

If you’re considering cloud repatriation this year, talk to us about how we can help you cost-effectively monitor performance, secure workloads, manage infrastructure power, and sustainably cool your equipment.

3. SMPTE ST 2110 adoption will rise

Whilst linear viewing is still strong for live TV, IP-based streaming continues to catch up.

As is often the case with technology advancements, the larger players are usually the first to adopt it and reap the rewards.

We saw the ST 2110 standard in use at World Cup 2022 in Qatar, and of course it’s likely to play its part at World Cup 2026 in the United States, Canada, and Mexico. It makes technical and creative workflows more flexible, enables broadcast professionals to manage live production from geographically dispersed facilities, and reduces the need for extensive infrastructure at the stadia.
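To see why ST 2110 asks so much of the underlying IP network, it helps to work out the raw payload rate of a single uncompressed ST 2110-20 video essence. The sketch below counts active-video bits only and ignores RTP/UDP/IP packet overhead:

```python
# Raw payload rate of an uncompressed ST 2110-20 video essence.
# Counts active-video bits only; RTP/UDP/IP overhead is ignored.

def essence_rate_gbps(width, height, fps, bits_per_pixel):
    """Active-video bit rate in Gbps (no packet overhead)."""
    return width * height * bits_per_pixel * fps / 1e9

# 10-bit 4:2:2 sampling => 20 bits per pixel
print(f"1080p50 10-bit 4:2:2: {essence_rate_gbps(1920, 1080, 50, 20):.2f} Gbps")
print(f"2160p50 10-bit 4:2:2: {essence_rate_gbps(3840, 2160, 50, 20):.2f} Gbps")
```

A single uncompressed HD flow is already around 2 Gbps, and UHD four times that, which is why multi-camera ST 2110 facilities are built on 25G/100G switching fabrics rather than general-purpose office networks.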

At our ST 2110 workshop last autumn, attended by many engineers from live TV stations, we explored three key areas: the gotchas to avoid when building broadcast IP infrastructure, today’s cyber security best practices, and how to achieve fully remote production without compromising performance.

It came as no surprise that ST 2110 is already being used in live broadcast environments here in the UK, to excellent effect.

But more recent developments – showing how traditionally inflexible production suite equipment can now be further consolidated, in some cases down to a web browser on an Apple Studio Display – garnered a lot of attention and interaction from our guests.

Advances such as this make remote production even more accessible. And we think the combination of this and a newer ability to embed machine-readable electronic markers directly into the audio and video signals – enabling the comparison of multiple flows carrying the same service, i.e. across satellite, SRT, and IP – is going to accelerate ST 2110 adoption in live TV environments this year.

Of course, linear broadcast isn’t going away any time soon, but we think the combination of another World Cup being underpinned by ST 2110, more parts of the workflow being carried out remotely, and continued innovation will accelerate live broadcasting over IP networks.

Please contact us to discuss your requirements.

4. AI preparation will mature

It’s no secret that AI usage continues to grow at explosive rates. For various reasons, mainly around data protection, performance, and cost, we’re seeing more clients opting to build their own internal AI systems.

But any modern application can only be as good as the infrastructure that delivers it. And from an engineering perspective, AI workloads are fundamentally different from traditional enterprise applications.

And if you’re building your own AI systems, you’ll be pleased to know we have identified three considerations that are often overlooked, along with the measures needed to address them.

Firstly, AI servers use more power than non-AI equivalents. AI inference platforms routinely draw several times more power per node than conventional servers, and large-scale distributed training environments can increase total power demand by orders of magnitude at the workload level.

This creates two challenges: ensuring sufficient power capacity for both current and future deployments, and maintaining the visibility needed to understand real-world consumption.

To address this, we provision intelligent Power Distribution Units (PDUs). They provide clear visibility of power draw on a per-port basis, showing exactly how much energy each device uses so that energy costs can be forecast accurately. They also provide total control over power usage in that environment: if a device is added without pre-approval, it can’t be switched on.
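As a sketch of how per-port readings feed a cost forecast, the example below aggregates a few outlet-level wattages into a monthly energy estimate. The port names, wattages, and tariff are invented for illustration; an intelligent PDU would supply the readings itself (typically via SNMP or a REST API):

```python
# Sketch: turning per-port PDU power readings into a monthly energy
# cost forecast. Port names, wattages, and the tariff are invented
# for illustration; a real PDU would supply the readings.

TARIFF_PER_KWH = 0.25  # assumed electricity price, GBP/kWh

readings_w = {          # steady-state draw per PDU outlet, in watts
    "gpu-node-01": 2800,
    "gpu-node-02": 2750,
    "storage-01": 450,
    "leaf-switch-01": 300,
}

def monthly_cost(readings, tariff=TARIFF_PER_KWH, hours=730):
    """Forecast monthly energy cost (730 h is roughly one month)."""
    total_kwh = sum(readings.values()) / 1000 * hours
    return total_kwh * tariff

print(f"Forecast monthly energy cost: £{monthly_cost(readings_w):,.2f}")
```

Note how the two GPU nodes alone dominate the total, which is exactly the visibility that makes per-port monitoring worthwhile for AI deployments.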

The second consideration is thermal management, which is driven by power consumption. Nearly all energy drawn by compute equipment becomes heat, and AI systems generate more thermal load, with dense accelerators concentrating it into smaller areas than traditional servers.

A cost-effective and sustainable way of addressing the heat is with water, rather than cold air.

Water-cooled heat exchangers provide cost-effective cooling by connecting onto the rear or side panel of server cabinets and removing the heat generated by the active equipment at source. This doesn’t just save money; it also negates the need for hot/cold aisle containment, so you can optimise space and explore other room layout options.
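The sizing behind that approach is straightforward physics: the water flow needed to carry away a rack’s heat load follows from Q = ṁ·c·ΔT. The 30 kW load and 10 °C temperature rise below are assumed figures for illustration, not a recommendation for any specific product:

```python
# Sketch: water flow needed for a rear-door heat exchanger to remove
# a rack's heat load. The 30 kW load and 10 degC water temperature
# rise are assumed figures for illustration.

C_P_WATER = 4186  # specific heat of water, J/(kg*K)

def flow_rate_lpm(heat_load_kw, delta_t_c):
    """Water flow (litres/min) to remove heat_load_kw at a delta_t_c rise.

    From Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT),
    and 1 kg of water is roughly 1 litre.
    """
    kg_per_s = heat_load_kw * 1000 / (C_P_WATER * delta_t_c)
    return kg_per_s * 60

print(f"30 kW rack, 10 degC rise: {flow_rate_lpm(30, 10):.1f} L/min")
```

Around 43 litres per minute handles a 30 kW rack at a 10 °C rise – a modest flow compared with the volume of chilled air needed to move the same heat, which is why water wins on cost and density.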

The third consideration, often the most important of the three, is the network. This can be one of the most limiting factors in AI success. Being fast doesn’t always mean ready for AI.

For efficient AI, you need lossless network performance that’s capable of moving masses of east-west traffic with low latency, under sustained load, and over prolonged periods of time.

So, in addition to a well-designed network topology, you’ll need to understand things like throughput, latency, jitter, storage I/O over the network, concurrency, and failure scenarios.
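As a small illustration of what that analysis involves, the sketch below reduces a set of latency samples to a mean, a simple inter-sample jitter estimate, and the worst-case tail value. The sample values are synthetic, standing in for real probe data:

```python
# Sketch: summarising latency samples into the metrics discussed above.
# The microsecond values are synthetic, standing in for probe data.

import statistics

samples_us = [105, 98, 110, 102, 97, 310, 101, 99, 104, 100]

def latency_summary(samples):
    """Mean, simple jitter (mean of absolute deltas), and tail maximum."""
    deltas = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return {
        "mean_us": statistics.mean(samples),
        "jitter_us": statistics.mean(deltas),  # simple inter-sample jitter
        "max_us": max(samples),                # tail outlier
    }

print(latency_summary(samples_us))
```

Note the single 310 µs outlier: the mean barely moves, but the jitter and tail figures flag it immediately – and it’s precisely those tail events, under sustained load, that stall synchronised AI training jobs.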

Validating network readiness therefore requires realistic testing. And if this is something you’d like to understand in more detail, you’ll be pleased to know we’ll be exploring strategies for optimising networks for AI success at our Future Networks Live event, 28th April at Green Park, Reading.

Click here for more information and registration.

And if you can’t make the event but want to know more, please contact us to discuss your needs.

5. New regulations for software supply chains

What do phishing attacks and software vulnerabilities have in common? They both have the potential to circumvent traditional perimeter defences.

Security Awareness Training and Testing platforms have been around for many years to help prevent staff from being duped by phishing techniques. Most companies use one, and many Cyber Insurers insist on them.

But when it comes to addressing software vulnerabilities, it’s not quite so straightforward.

New software vulnerabilities emerge all the time. Some are contained before harm can be caused; others, as with MOVEit and SolarWinds, exploited widely trusted software and caused widespread downstream disruption.

The UK Government’s software code of practice outlines basic expectations for what companies that use software (all of us!) should do to reduce the likelihood and impact of software supply chain attacks and other software resilience incidents.

But whilst following the guidance is voluntary, an announcement is expected in 2026 to enforce supply chain security measures for utility digital service providers, data centres, and managed service providers.

This will come in the form of The Cyber Security and Resilience Bill, which will reform and add to the existing Network and Information Systems (NIS) Regulations 2018. The new Bill is expected to embed requirements around software supply chain transparency and vulnerability management, suggesting Software Bill of Materials (SBOM) platforms as a practical mechanism for demonstrating compliance.

Put simply, an SBOM is an exhaustive list of all the libraries, components, and dependencies that make up a software product, and SBOM platforms compile and maintain these lists to help manage the risks and compliance obligations associated with software supply chains.
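To make that concrete, the sketch below reduces a minimal CycloneDX-style SBOM document to a flat component inventory. The JSON is hand-written for illustration, not the output of a real SBOM tool:

```python
# Sketch: reducing a minimal CycloneDX-style SBOM to a flat component
# inventory. The JSON document is hand-written for illustration, not
# the output of a real SBOM tool.

import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "openssl", "version": "3.0.13", "type": "library"},
    {"name": "log4j-core", "version": "2.17.2", "type": "library"}
  ]
}
"""

def component_inventory(doc: str):
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(doc)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

for name, version in component_inventory(sbom_json):
    print(f"{name} {version}")
```

An inventory like this is what lets you answer, within minutes of a new CVE being published, the question that matters most: “do we run the affected component, and in which products?”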

Whilst UK regulation is not yet in force, the EU Cyber Resilience Act already imposes machine-readable SBOM requirements on manufacturers, importers, and distributors of products with digital elements (including software). And this will of course apply to UK companies selling software and digital products to EU countries, even though the regulation doesn’t yet apply here.

In summary, if you work for a utility digital service provider, data centre, or managed service provider, you should prepare for a regulatory announcement this year that’s expected to include new software supply chain security measures.

If you work for a company that sells software to EU countries, you’re advised to check that your company complies with the EU Cyber Resilience Act.

If you work for an organisation that uses software, and let’s face it, that’s all of us, there has never been a better time to manage the risks associated with the software you use. And as is often the case, regulations start at the top and eventually work their way down the chain.

For organisations trying to turn these regulatory expectations into practical action, you’ll be pleased to hear we’ll be covering how SBOM platforms serve as a foundation for risk management and compliance at our forthcoming Future Networks Live event, 28th April at Green Park, Reading.

Click here for more information and registration.

And if you can’t make the event but want to know more, please contact us to discuss your needs.

Richard Clothier, Senior Product Marketing Manager, Red Helix.