Chipmaker MediaTek has denied a report that it is cooperating with Google to work on Google's in-house Tensor Processing Units (TPUs).
The report from Taiwan-based media Economic Daily, which MediaTek calls "false", claims that the new TPU will be manufactured on TSMC's 5nm process and is expected to enter tape-out by the end of 2023, with volume production anticipated in 2024. Citing sources, Economic Daily claims that MediaTek will provide its serializer/deserializer (SerDes) solution to Google.
SerDes is a critical component in IC design that enables the transmission of high-speed digital data between different subsystems or devices, and is commonly used in high-speed communications.
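The idea behind SerDes can be sketched in software: wide parallel words are flattened into a single serial bit stream on the transmit side and reassembled on the receive side. The following is a purely illustrative Python sketch, not a model of any actual MediaTek or Google hardware.

```python
# Illustrative parallel-to-serial round trip (software sketch only;
# real SerDes blocks also handle clocking, encoding, and equalization).
def serialize(words, width=8):
    """Flatten parallel words into one serial bit stream, MSB first."""
    bits = []
    for w in words:
        bits.extend((w >> i) & 1 for i in range(width - 1, -1, -1))
    return bits

def deserialize(bits, width=8):
    """Reassemble the serial bit stream into parallel words."""
    words = []
    for i in range(0, len(bits), width):
        w = 0
        for b in bits[i:i + width]:
            w = (w << 1) | b
        words.append(w)
    return words

data = [0xA5, 0x3C, 0xFF]
stream = serialize(data)          # 24 bits on the "wire"
recovered = deserialize(stream)   # back to the original words
```

The payoff in hardware is pin count: three 8-bit words cross the link on a single differential pair instead of eight parallel lines.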
Google's TPUs are custom-developed ASICs that function as matrix processors specialized for neural network workloads. In a research paper published in April, Google claimed that it had built a system with over 4,000 TPUs joined with custom components designed to run and train AI models. The system, according to Google, was used to train Google's Pathways Language Model (PaLM), a competitor to OpenAI's GPT models. In the same research, Google claimed that its TPU-based supercomputer, TPU v4, is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. The results, however, were not compared with the H100, Nvidia's latest AI chip.
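The reason a "matrix processor" maps so well onto neural network workloads is that a dense neural layer reduces to one matrix multiply plus a cheap elementwise nonlinearity; the multiply is what a TPU's matrix unit accelerates. A minimal sketch, with made-up toy weights and inputs:

```python
# A dense layer is a matrix multiply followed by an activation.
# Pure-Python matmul for illustration; a TPU performs this on a
# dedicated hardware matrix unit instead.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def relu(m):
    return [[max(0.0, x) for x in row] for row in m]

# Hypothetical tiny layer: a batch of 2 inputs, 3 features -> 2 outputs.
x = [[1.0, 2.0, 3.0], [0.5, -1.0, 2.0]]
w = [[0.1, -0.2], [0.3, 0.4], [-0.5, 0.6]]
y = relu(matmul(x, w))
```

Both training and inference spend most of their time in exactly this kind of operation, which is why a chip built around it can beat a general-purpose processor on performance per watt.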
Alongside cloud-based TPUs, Google has also been working on Edge TPU to accelerate machine learning (ML) inferencing on low-power devices. According to Google, an individual Edge TPU can perform 4 trillion operations per second (4 TOPS), using only 2 watts of power.
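A back-of-envelope calculation from Google's stated figures gives the Edge TPU's efficiency in the usual TOPS-per-watt terms:

```python
# Efficiency implied by Google's published Edge TPU figures:
# 4 trillion operations per second at 2 watts.
ops_per_second = 4e12   # 4 TOPS
power_watts = 2.0
tops_per_watt = ops_per_second / power_watts / 1e12
print(tops_per_watt)  # prints 2.0, i.e. 2 TOPS per watt
```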
In a previous interview with DIGITIMES Asia, Finbarr Moynihan, head of Corporate Marketing at MediaTek, indicated that edge AI applications will likely shape all aspects of MediaTek's business. "For sure most, if not all, of our SoC solutions for smartphones, digital TV, automobile, industrial and IoT, smart speakers & displays, Chromebooks, tablets and smart home will continue to integrate more and more capabilities for AI applications at the edge," he said.