The chipmaker's two-for-one AI and virtual RAN deal is unlikely to make sense to operators outside the most densely populated and fiber-rich cities.

Iain Morris, International Editor

June 5, 2023

Nvidia CEO Jensen Huang has been one of the main beneficiaries of the AI boom. (Source: Nvidia)

Nvidia has been sizzling like a red-hot chip pan on the greasiest high street in Britain. Carrying a stock-market value of roughly $350 billion when the year started, it briefly joined the trillion-dollar club in late May and had a market cap of more than $970 billion at the start of this week. Investors have swallowed the story that companies will buy the chipmaker's graphics processing units (GPUs) to run the gamut of artificial intelligence (AI) applications, and just mentioning AI these days is enough to add a few billion dollars to your market cap.

But Nvidia's GPU pitch to operators is seriously flawed, according to multiple sources with a telco background who preferred to remain anonymous. The basic idea is that a telco could use these GPUs not only for AI but also to build a virtual radio access network (vRAN), one that runs software on general-purpose processors shared with other workloads, all managed by the same cloud-native technologies. Important sections of the industry are unconvinced.

This two-for-the-price-of-one deal – AI with vRAN included like free mushy peas – is needed to justify the investment. GPUs are such a pricey and power-hungry type of chip that using them for vRAN alone would probably not make economic sense. Even Nvidia acknowledged this when asked about putting a GPU at an individual cell site to support RAN software but no other applications. "There are cheaper ways to implement that," said Ronnie Vasishta, Nvidia's senior vice president of telecom, on a call with reporters last week.

Just as hard to envisage, however, is that an operator would buy a GPU to host AI applications at an individual site. The biggest customers for Nvidia are clearly the Internet giants, which have been installing GPUs in their massive data centers to handle applications like ChatGPT, the one responsible for most of the recent AI hype. Telcos and other businesses that invest in GPUs for AI will probably do likewise, putting them mainly in centralized facilities.

This is evidently how Nvidia sees its GPUs being deployed. "We're talking to many telcos today where they have power, they have racks in those sites, and they want to leverage those sites more cost efficiently for other applications as well or more pooling of RAN," said Vasishta. "That model is becoming more and more prevalent to run more computational pooling of RAN sites and RAN traffic."

Pooling problems

Unfortunately, this pooling can only go so far when it comes to the RAN. In most of today's networks, the baseband and other computing is done at the radio site. Virtualization makes it easier to pool central unit (CU) functions not sensitive to latency, a measure of the journey time for data signals. Tommi Uitto, the head of Nokia's mobile business group, reckons a telco in a country of the UK's size – where market leader BT maintains about 19,000 sites – could feasibly bring these CUs into about 100 facilities. But he also thinks the distributed units (DUs) hosting the latency-sensitive baseband will largely remain at sites.

"The DU must be no more than 20 kilometers from the cell site because it has real-time-sensitive functions," Uitto told Light Reading during a previous interview. "In practice, it might not make sense for anybody to build a new grid of this type of edge sites, and that is why we believe that most DUs will actually reside at the cell site."
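Uitto's 20-kilometer figure follows from simple physics: light travels through fiber at roughly 200,000 km/s, or about 5 microseconds per kilometer, and the real-time baseband loop is commonly held to tolerate only on the order of 100 microseconds of one-way fronthaul propagation delay. The sketch below illustrates the arithmetic; the budget figure is an industry rule of thumb rather than anything quoted by Nokia or Nvidia:

```python
# Back-of-the-envelope fronthaul latency sketch: why DUs stay near the cell site.
# Assumptions (rule-of-thumb figures, not vendor specifications):
#   - light propagates in fiber at ~200,000 km/s (~5 us per km)
#   - the real-time baseband loop tolerates ~100 us of one-way
#     fronthaul propagation delay

FIBER_SPEED_KM_PER_S = 200_000          # roughly 2/3 the speed of light in glass
ONE_WAY_BUDGET_US = 100                 # assumed one-way propagation allowance

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay over fiber, in microseconds."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1_000_000

def max_du_distance_km(budget_us: float = ONE_WAY_BUDGET_US) -> float:
    """Fiber distance that exhausts the propagation budget."""
    return budget_us / 1_000_000 * FIBER_SPEED_KM_PER_S

print(one_way_delay_us(20))     # 20 km of fiber -> 100.0 us
print(max_du_distance_km())     # the assumed budget allows at most 20.0 km
```

In practice, switching and processing delays eat into the same budget, and fiber rarely runs in a straight line, so the usable radius is the raw propagation limit at best.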

Nvidia's share price ($) (Source: Google Finance)

The exception is likely to be in big cities well served by high-speed fiber networks, he said. Japan's SoftBank, Nvidia's flagship telco customer, told Light Reading it plans to "consolidate the distributed units to the greatest extent possible." But SoftBank operates in a country that is unusually rich in fiber. Elsewhere, telcos would need chips for DUs at individual sites, and that makes Nvidia – by its own admission – look very expensive alongside non-GPU-based rivals.

Competitors that include AMD, Intel, Marvell and Qualcomm are offering flavors of customized silicon based on x86 (in the case of AMD and Intel) and Arm (Marvell and Qualcomm) designs. These various options would support some or all of the computationally demanding baseband (also called Layer 1) functions and relieve pressure on the main central processing unit.

Decisions, decisions

Even SoftBank's deployment of Nvidia GPUs seems unlikely to be very extensive. The operator, which claims a 21% share of Japan's mobile market, had already installed more than 50,000 5G basestations, covering almost 91% of the population, by the end of March. Virtualizing this now would entail stripping out equipment and writing off the expense.

SoftBank has also not yet decided which companies will provide the RAN software that runs on these GPUs, and it has not even guaranteed a role for Aerial, Nvidia's own Layer 1 software product. "We are not able to disclose details, but we are currently in discussions with multiple RAN vendors, as well as discussions on utilizing Nvidia's Aerial," said a spokesperson for the operator via email. Decisions have yet to be taken about the providers of cloud tools, radio units and servers hosting the GPUs, as well.

In the meantime, Intel remains the number-one choice for operators that want to virtualize today and move quickly ahead on rollout. Besides its chip technologies, it offers software, experience and a thriving ecosystem of partners. The absence of competition has generated concern about Intel's dominance. But a mainstream challenge may have to come from someone other than Nvidia.


— Iain Morris, International Editor, Light Reading


About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the previous 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).

