Computex 2024 is set to run from June 4-7 in Taipei.
With the advent of the Generative AI era, the technology industry is poised to showcase a diverse array of innovations at Computex 2024. Themed "Connecting AI", this year’s event will unveil the latest applications and technologies in AI development.
AI PCs were undoubtedly the hottest topic at Computex 2024. Various PC manufacturers showcased related solutions, and major processor players like AMD, Intel, and Arm emphasized the importance of AI PC applications.

As one of the earliest companies to implement AI functions on PCs, Synaptics brought even more advanced Edge AI solutions to this year's event. Its latest innovations build upon the company's three technology pillars: sensing, processing, and connecting.

"At the demos you saw, even for things like a PC touchpad, there are techniques such as accidental contact mitigation (ACM), which uses AI algorithms," said Satish Ganesan, Synaptics' SVP & GM of Intelligent Sensing Division and Chief Strategy Officer. "Our goal is to infuse more and more AI features into PCs, smartphones, and automotive at the sensing edge where the user interacts with the device."

"The other side is our processing solutions with our newly announced Astra family of processors, where we are uniquely positioned. We are building processors that go after markets wherever computer vision, voice, or any form of edge AI is required, like in smart home, industrial automation, and so on," he added. "Some of the same elements are then used in the PC space, where you saw user presence detection techniques that implement vision AI algorithms to enhance the overall user experience."

In a demo at Computex 2024, Synaptics showed how its intelligent sensing solution with the new Edge AI chip enhances the PC user's experience. The demo focused on practical, everyday use cases in office environments. Features include Human Presence Detection (HPD), head pose tracking, and gesture control, which allow automatic adjustments such as waking the computer, locking it, and managing battery and privacy settings.

For instance, users can mute their microphone or switch focus between monitors effortlessly using gestures. The integration of vision AI algorithms and PC cameras provides seamless interaction, ensures user privacy and security, saves power, and improves overall productivity by recognizing user identity and preferences, illustrating the real-world benefits of AI-enhanced PCs beyond the technological advancements themselves.

Ganesan noted that many AI PC talks from major companies at Computex 2024 focused on the computational power of processors, but "We're not going to compete with those main processors. We're not doing LLM processing or things like that. Our value proposition will be in other areas, where we provide chipsets in the touchpad, fingerprint and so on. Of course, we're doing it on Astra for other markets as the main processor."

Synaptics Astra was launched at Embedded World with the debut of the SL-Series, a new family of edge AI processors described as "AI-native SoCs" capable of running the Linux and Android operating systems. The series uses Arm Cortex-A cores, scales from 2 to 8 TOPS of performance, and targets multi-modal consumer, enterprise, and industrial IoT workloads. The upcoming SR-Series MCUs with tiered inferencing were also demonstrated. Together with the SL-Series, the MCUs make context-aware computing a reality in the smart home, on the factory floor, and at work.

According to Synaptics, the SL-Series and SR-Series of Astra SoCs deliver a unified experience combining standards-based open software frameworks, rapid prototyping kits, full-featured AI/ML toolkits, and Synaptics' Wi-Fi and Bluetooth connectivity solutions.
The SL-Series is also supported by the Astra Machina Foundation Series development kit, which allows easy and rapid prototyping.

Beyond hardware, Synaptics' solutions emphasize optimizing AI algorithms to operate efficiently at the edge. "So we can work with a smaller memory footprint and lower-speed processors," Ganesan said. "Right now, the take rate of AI PCs is still pretty small, but I think it is going to be pretty explosive. We focused on AI early, and the biggest thing in AI is basically the software and the framework." Synaptics' AI software accepts commonly used input frameworks such as TensorFlow, TFLite, TFLM, ONNX, Caffe, and PyTorch.

For clients with varying levels of AI expertise, the company can provide comprehensive support, including the compiler, tools, and resources such as predefined models and libraries for different use cases, such as vision and audio. "That is where we will differentiate in the market," said Ganesan. "Most companies only focus on hardware or software; we are the only ones who deliver an end-to-end solution. That helps our customers bring products to market faster because they don't need to learn from scratch. We have done all the learning for them."

Advances in Docking, Enterprise Headsets, and Fingerprint Sensors

Beyond the intelligent sensing technology applied to PCs' HPD functions mentioned above, Synaptics also showcased a range of other solutions at Computex 2024, including docking station solutions that support the latest DisplayPort (DP) 2.0 standard, add virtual displays, and provide enhanced user guides for troubleshooting. Low-latency wireless video was also shown, along with audio solutions that enable ultra-low-power enterprise headsets supporting simultaneous audio streaming with BT Classic, BLE, DECT, and USB-connected devices.

Regarding the touchpad and fingerprint solutions for PCs, the fingerprint market is shifting away from traditional palm-rest placement toward integration on the keyboard, a change that offers more streamlined and reliable integration.

The touchpad sector is seeing a significant increase in size to match larger screens, making advanced palm rejection technology all the more important. Synaptics' AI-based algorithm improves palm rejection, addressing common complaints about accidental cursor movements during typing, and enhances accuracy when distinguishing between fingers and palms, especially at the touchpad edges.

Additionally, the integration of Zoom and Teams functions directly into the touchpad was introduced. This allows users to control mute and video settings without moving the cursor, simplifying the user experience during online meetings. This customization provides flexibility based on OEM preferences.

The haptic force touchpad is also poised to become dominant in flagship PCs, offering a larger touch area and integrating touch and force sensing in a single chip. This technology allows clicks anywhere on the pad and provides a consistent tactile response, enhancing the user experience. The development of piezo technology, combining thin piezo disks with touch sensors, promises even more precise and localized haptic feedback, potentially simulating different textures and offering advanced notification patterns.

Taiwan's Pivotal Role

Thanks to the boom in AI PCs, this year's Computex gathered numerous top executives from major companies and garnered worldwide attention.
Ganesan also acknowledged Taiwan's pivotal role in the global PC market and ecosystem: "Taiwan is a big center for PCs; almost all the PC makers and ODMs are here. We're going to see a lot of developments here focused on AI PCs and things like that. It plays a very important role in the hardware ecosystem that supplies worldwide. That is why Taiwan is one of Synaptics' biggest R&D centers."

Although the local team is already significant, as the market for AI PCs and other Edge AI applications grows, Ganesan did not rule out further expansion, with more investment and the recruitment of more local talent. Synaptics plans to enhance its support and infrastructure in Taiwan, leveraging the country's central role in both implementation and innovation.
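To ground the software story above, the sketch below shows a generic presence-detection inference loop built on the open-source TensorFlow Lite runtime, one of the framework formats Synaptics says its toolchain accepts. The model file, input format, and threshold are placeholder assumptions; this is not Synaptics' SDK or its actual HPD implementation.

```python
# Illustrative sketch only: a generic TensorFlow Lite presence-detection loop,
# not Synaptics' SDK. Model file, input size, and threshold are placeholders.
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="presence_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def user_present(frame_rgb: np.ndarray) -> bool:
    """frame_rgb: HxWx3 camera frame, already resized to the model's input shape."""
    interpreter.set_tensor(inp["index"], frame_rgb[np.newaxis, ...])
    interpreter.invoke()
    score = float(interpreter.get_tensor(out["index"]).squeeze())  # assumed single score output
    return score > 0.5  # present: keep the PC awake; absent: dim or lock
```

A production HPD pipeline would run a vendor-optimized model on the sensing chip itself; the sketch only shows the general shape of the inference loop.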
In the post-pandemic era, changes in consumer behavior and the rapid expansion of chain businesses have brought new challenges to industries such as retail, catering, and medical care. In response to this trend, DataVan International Corporation, which has been deeply involved in the point-of-sale (POS) industry for many years, chose "Smart Hotel" as its theme at Computex 2024 and showcased innovative products and solutions spanning smart catering, self-service, AI applications, and more.

DataVan, long established in POS products, has made significant progress in software development and customer base expansion in recent years. While continuing to serve the retail and catering industries, the company has also extended its products into finance, healthcare, and other fields. Zhou Ji-Ping, Chairman of DataVan International Corporation, pointed out that advancements in AI technology have made POS products one of the fastest categories of industrial equipment to adopt the technology.

Take the catering industry as an example. The POS system's role in this field has evolved into an intelligent big data platform that connects all aspects of the business. When customers order food at the front desk, the relevant information is transmitted not only to the kitchen and accounting department but also simultaneously to head office systems, where personnel in each department can carry out their respective tasks by classifying and distributing the data.

DataVan International Corporation Grows with Customers by Keeping POS Functions Up-to-Date

Zhou went on to illustrate that the functions and roles of POS systems have continued to evolve since their inception. From their earliest function as a mere cash register, POS products have evolved to handle accounting work and manage the purchase, sale, and inventory of goods. Almost 20 years ago, POS began to undertake simple customer data collection and incorporate portable inventory equipment and network connectivity.

In recent years, the addition of technologies such as sensors, big data, and AI has catapulted POS systems to the core of retail and catering operations, assisting businesses in realizing the vision of precise, data-driven operations. Throughout this process, DataVan has applied its technical capabilities and design services to market needs, providing more advanced and comprehensive intelligent solutions that assist customer transformation and promote digitalization for enterprises in various fields.

Recently, the company has been engaging in AI research and development in full force while closely observing market trends to launch a series of solutions. Zhou noted that the labor shortage caused by the declining birthrate continues to plague many sectors in Taiwan. Coupled with changes in consumer behavior during the COVID-19 pandemic, the number of self-service and unmanned stores is increasing. Due to this trend, catering, retail, hotels, gyms, and medical institutions all require intelligent platforms that are intuitive, easy to operate, and highly integrated with the various systems in their field, reducing labor requirements while meeting the requirements for management and service quality.
At COMPUTEX 2024, DataVan International Corporation used "Smart Hotel" as its theme to demonstrate how the company's innovative solutions help customers create comprehensive smart applications.

DataVan International Corporation Builds Smart Hotel Application Scenarios to Showcase Technical Capabilities

Visitors entering the DataVan booth are greeted by a self-service check-in machine that lets travelers complete check-in procedures quickly without waiting in line. The machine identifies and authenticates travelers via Optical Character Recognition (OCR), completes the check-in process, and issues the key card efficiently. After checking in, travelers can enjoy a convenient dining experience through the smart catering platform. The unmanned ordering machine supports multiple languages and provides information through a detailed menu, and its seamless integration with the back-end Kitchen Display System (KDS) drastically improves the efficiency and accuracy of meal delivery.

In entertainment and ticketing, DataVan demonstrates a smart wristband solution developed in cooperation with Japan's Tensheng Company. Travelers only need to wear a stylish wristband to use the hotel's gym, swimming pool, and other facilities. The QR code on the wristband can be used for identification and bill payment, eliminating the need to carry cash or credit cards. The smart wristband is not limited to use within hotels; it can also be connected to a wide variety of external systems to provide additional services. For example, with the concert economy and sports events that have emerged in recent years, hotels can cooperate with event organizers to provide wristbands with electronic ticket functions connected to a ticketing system, allowing guests to access event venues quickly and seamlessly. Such convenient and unique experiences will create new business opportunities for hotels.

Zhou said that AI technology plays a critical role in DataVan's smart hotel scenarios. Travelers can interact with AI customer service at any time through voice, text, and video messaging to obtain information and services. The AI customer service can quickly understand user intentions, provide customized information, and greatly improve the quality and efficiency of service.

In addition to the high-quality experiences that forward-looking technology enables in smart hotel scenarios, DataVan's POS and kiosk application solutions are also on display at COMPUTEX 2024. The POS section features the Amber series, which uses a 15.6-inch screen and is equipped with 12th-generation Intel processors. The series features a thin, light casing with a stylish aluminum alloy exterior and excellent performance, along with a unique easy-to-maintain design and smart cooling technology that greatly improve reliability and product life cycle. The Amber series is designed specifically for industries such as catering and retail that place high demands on the appearance and performance of POS terminals, and is DataVan's flagship product in the high-end POS market.

In terms of self-service kiosk solutions, a variety of products tailored for different scenarios are on display, with applications covering meal ordering, medical registration, transportation ticketing, and other fields.
DataVan's kiosk solutions integrate advanced human-machine interaction technologies such as facial recognition, voice recognition, and touch controls to provide a smarter and more convenient self-service experience. The exhibition content above makes the company's technical strength and the results of its cross-industry cooperation evident.

Looking to the future, Zhou pointed out that DataVan will continue to build on its leading position through its existing advantages. This includes launching a number of innovative products in the second half of 2024, such as next-generation M-POS (mobile POS) solutions featuring a thin and stylish design, as well as self-service kiosk terminals that support multiple payment methods. On the software side, cloud services will be added to the existing smart business management platform to help companies accurately understand market trends through big data analysis and AI technology. DataVan will continue to lead POS and kiosk innovation with new technologies that strengthen the company's competitiveness in the new era and meet the transformation requirements of customers in various fields.

Zhou Ji-Ping, Chairman of DataVan International Corporation, pointed out that advancements in AI technology have made POS products one of the fastest categories of industrial equipment to adopt the technology. Photo: DIGITIMES

Zhou Ji-Ping, Chairman of DataVan International Corporation, points out the practical applications of forward-looking technologies such as AI, big data, and sensors in the "Smart Hotel" scenario at COMPUTEX 2024. Photo: DATAVAN

At COMPUTEX 2024, DataVan International Corporation used "Smart Hotel" as its theme to demonstrate how the company's innovative solutions assist customers in creating comprehensive smart applications. Photo: DATAVAN

Under the leadership of Chairman Zhou Ji-Ping and new General Manager You Zhicheng, who has a construction background, DataVan will combine its electronics, construction, and hotel resources to strengthen IT software research and development, build a new smart technology service team, and open up an international blue ocean market for POS. Photo: DATAVAN

General Manager You Zhicheng pointed out that DataVan will further expand into business fields such as smart management, smart buildings that deliver efficient services with minimal staffing, and the green building circular economy, with bright prospects ahead. Photo: DATAVAN
Inevitably, the adoption of generative AI and large language models (LLMs) has brought more intelligence and efficiency into our lives and work than in the past decade. Moving forward, demand for running AI and LLMs on edge devices is growing, as doing so provides lower latency, higher privacy, and more flexibility. It is set to redefine the level of intelligence of smart devices and broaden the landscape of mobile scenarios.

Fibocom (Stock code: 300638), a leading global provider of IoT (Internet of Things) wireless solutions and wireless communication modules, launches a series of on-device AI solutions powered by the Qualcomm QCS8550 and QCM6490 processors from Qualcomm Technologies, Inc. The solutions are designed for compute-intensive application scenarios such as robotics, automated vehicles, video collaboration, and smart retailing, accelerating industrial digitalization and intelligent transformation.

Flagship on-device AI solution powered by the Qualcomm QCS8550 processor

Utilizing the powerful Qualcomm QCS8550 processor, Fibocom's flagship on-device AI solution is designed to deliver strong performance and unparalleled multimedia capabilities. Equipped with an octa-core CPU and an Adreno 740 GPU, the solution supports up to four concurrent displays and 8K video encoding and decoding. It serves as a strong core for industries requiring high-definition video playback, fast data analytics, and low latency, such as automated vehicles, robotics, remote medical surgery, computer vision systems, live streaming, video conferencing systems, and more.

Premium on-device AI solution powered by the Qualcomm QCM6490 processor, piloting the high-end AIoT market

The solution developed with the Qualcomm QCM6490 processor features an octa-core CPU with high-speed HVX (Hexagon Vector Extension) technology and a high-performance graphics engine, allowing smooth 4K video playback and multi-channel camera inputs. In addition, the solution supports a maximum of 5 ISPs (Image Signal Processors) and up to 5-8 simultaneous camera streams, easing customers' concerns about multi-camera deployments as well as dual-screen display scenarios.

The solution offers flexible wireless connections such as 5G, Wi-Fi, and Bluetooth, and is equipped with a GNSS receiver for precise navigation both indoors and outdoors. In terms of software, the solution supports the mainstream operating systems Android, Linux, and Windows. Leveraging computing power of up to 13 TOPS, the solution efficiently handles data-intensive computation and processing, running various 1.3B/3B/7B open-source LLMs on the device, making it an ideal solution for smart retail, in-vehicle infotainment (IVI), and industrial inspection.

"We are excited to see our powerful Qualcomm processors, the QCS8550 and QCM6490, being utilized in Fibocom's innovative on-device AI solutions," stated Dev Singh, Vice President of Business Development and Head of Building, Enterprise & Industrial Automation at Qualcomm Technologies, Inc. "This collaboration is a testament to our commitment to advancing AI capabilities at the edge, enhancing performance and efficiency across a range of applications from industrial automation to smart retail."

"It is paramount to master the productivity of AI and create value-added solutions at the edge for our customers, who urgently need to build their smart devices on our solutions.
We are thrilled to develop these solutions based on the advanced and powerful chipsets from Qualcomm Technologies, as they not only provide the fundamental architecture of edge intelligence but also enrich flexibility in network connections such as 5G, Wi-Fi, and Bluetooth," said Ralph Zhao, VP of MC BU at Fibocom. "In collaboration with Qualcomm Technologies, Fibocom is dedicated to injecting new versatility into the future of intelligence and accelerating the implementation of our collaboration in robotics, industrial automation, live streaming, and more."
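For readers curious what "running 1.3B/3B/7B open-source LLMs on the device" looks like in practice, below is a minimal, generic sketch using the open-source llama-cpp-python runtime with a quantized model. It is an illustration under stated assumptions, not Fibocom's SDK: the model file, context size, and prompt are placeholders, and Fibocom's solutions would typically use Qualcomm's own tooling to run the model on the NPU rather than the CPU.

```python
# Generic illustration of on-device LLM inference with an open-source runtime
# (llama-cpp-python) and a quantized model file. This is not Fibocom's or
# Qualcomm's toolchain, which would typically offload work to the Hexagon NPU.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # placeholder quantized 7B model
    n_ctx=2048,    # modest context window for short retail/IVI prompts
    n_threads=8,   # matches an octa-core application processor
)

resp = llm(
    "Summarize today's three best-selling items for the store manager:",
    max_tokens=64,
    stop=["\n\n"],
)
print(resp["choices"][0]["text"].strip())
```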
Fibocom (Stock code: 300638), a leading global provider of IoT (Internet of Things) wireless solutions and wireless communication modules, launches a 5G + Wi-Fi 7 solution for 5G FWA during Computex 2024, satisfying the demand for higher data throughput and extending Wi-Fi 7 performance to home, SMB (small and medium-sized business), and industrial scenarios.

Boosting 5G + Wi-Fi 7 Capabilities with Peak Performance Up to BE7200

The ultra-high bandwidth and low latency of the FG370 make it an optimal wireless solution for 5G FWA scenarios such as the smart home, SMB, and industrial monitoring. A multiplier effect is achieved by coupling it with the latest Wi-Fi 7 technology, unleashing the full potential of a lightning-fast wireless experience for end users.

Compliant with the IEEE 802.11be (Wi-Fi 7) standard, the 5G + Wi-Fi 7 solution supports the 4K QAM (Quadrature Amplitude Modulation) modulation scheme as well as up to 160MHz channel bandwidth to expand data transmission capacity and reduce latency. The solution also supports dual-band 2.4GHz 4x4 and 5GHz 5x5 operation to achieve an MLO speed of up to 7.2Gbps. In terms of reliability and transmission efficiency, the integration of OFDMA, Multi-RU, and MU-MIMO technologies guarantees interference-free, signal-boosted performance. Driven by the unique 5T5R system enabled by the Wi-Fi 7 chipset, end users benefit from 5dB of antenna gain and extended network coverage versus previous generations.

"As the world's first 5G module developed from MediaTek's T830, the Fibocom FG370 has been recognized and deployed in global Tier 1 operators' 5G CPE commercialization since its debut in October 2023. Evolving with the latest Wi-Fi 7 standard, Fibocom launched the tri-band BE19000 solution based on the FG370 during MWC 2024," said Simon Tao, VP of MBB Product Management Dept. and Head of MBB BU at Fibocom. "Now, Fibocom is glad to be the first module vendor to launch the industry's first dual-band BE7200 solution based on the FG370 and a compatible Wi-Fi 7 chipset, significantly boosting overall performance in terms of speed, cost, and frequency regulatory requirements. These three industry-leading innovations prove Fibocom's product capabilities on the MediaTek T830 platform and its dedication to helping customers win the market."

Visit Fibocom at stand #K1215a at Computex 2024 for more details and 5G + Wi-Fi 7 demonstrations.

Fibocom extends market leadership in 5G FWA by launching a 5G + Wi-Fi 7 solution at Computex 2024. Photo: Company
Fibocom (Stock code: 300638), a leading global provider of IoT (Internet of Things) wireless solutions and wireless communication modules, today announces the expansion of its RedCap module portfolio with the launch of the FG332 during Computex 2024, along with a cutting-edge 5G RedCap CPE solution that integrates the newly launched FG332 with the latest Wi-Fi 7 and Wi-Fi 6 technologies. This industry-piloting lineup of a 5G RedCap module and RedCap CPE solutions is set to accelerate 5G RedCap commercialization.

The FG332 is a 3GPP Release 17-compliant module tailor-made for lightweight 5G IoT applications, with optimized power consumption, enhanced network coverage, and R17 small data transmission. Developed from MediaTek's T300 chipset, the world's first 6nm radio-frequency system-on-chip (RFSOC) single-die solution for 5G RedCap, the module supports 5G SA and a maximum bandwidth of 20MHz, enabling peak rates of up to 227Mbps downlink and 122Mbps uplink. It is also backward compatible with 4G, ensuring continuous network connectivity. On the hardware side, the FG332 adopts an LGA form factor packaged at 29 x 32mm and uses a high-density hardware design for a compact PCB area, allowing greater flexibility in product design.

The module is pin-compatible with Fibocom's LTE Cat 4 module series NL668, easing customers' concerns about hardware investment and time to market. Leveraging an optimized power consumption system, the FG332 operates in a highly power-efficient mode and helps the UE (user equipment) extend battery life. The module supports regional frequency bands in Europe, North America, and APAC, and engineering samples of the FG332-NA will be available in Q3 2024.

FWA is gaining a larger market share as it enables broadband access for everyone, anywhere, anytime. The FG332 is positioned for the mass deployment of CPE (customer-premises equipment) and portable mobile hotspot devices with RedCap capabilities. Building on the high cost-performance and low power consumption of 5G RedCap, and in conjunction with the latest Wi-Fi 7 and Wi-Fi 6 technologies, Fibocom further offers a complete 5G RedCap CPE solution, pivotal for consumer and enterprise users to enjoy a dead-zone-free and interference-free wireless experience.

The new-generation 5G RedCap plus Wi-Fi 7 solution delivers higher data throughput of BE3600 and lower latency. Compatible with the IEEE 802.11a/b/g/n/ac/ax/be protocols, it enables end devices to run at a full speed of up to 3571Mbps in concurrent dual-band operation at 2.4GHz + 5GHz. The solution features Wi-Fi 7 benefits such as 4K-QAM for heavy data loads to enhance throughput, along with MLO, MRU, and preamble puncturing to fully utilize the bandwidth, and software intelligence to improve data transmission efficiency.

For a more affordable 5G RedCap + Wi-Fi 6 option, the solution uses a hardware accelerator for power optimization while maintaining an impressive speed of about 3Gbps. The solution performs well in both the 2.4GHz and 5GHz frequency bands: a maximum of 2402Mbps can be achieved in Wi-Fi 2x2 160MHz mode at 5GHz, and 574Mbps in Wi-Fi 2x2 40MHz mode at 2.4GHz.
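As a quick sanity check on the figures quoted above, the roughly 3Gbps aggregate for the Wi-Fi 6 option follows from adding the two per-band rates:

```python
# Quick sanity check of the quoted Wi-Fi 6 figures: the ~3Gbps aggregate is
# roughly the sum of the per-band rates given above.
rate_5ghz_mbps = 2402   # 2x2, 160MHz at 5GHz
rate_24ghz_mbps = 574   # 2x2, 40MHz at 2.4GHz
total_mbps = rate_5ghz_mbps + rate_24ghz_mbps
print(f"{total_mbps} Mbps = {total_mbps / 1000:.2f} Gbps")  # 2976 Mbps, about 3 Gbps
```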
In terms of network coverage, uncompromised signaling and consistency bring stable and seamless connectivity to end users.

"We are excited to unveil the FG332 RedCap module powered by MediaTek's T300 chipset at Computex 2024. At Fibocom, we are dedicated to leading the way in 5G innovation, and thanks to the cooperation with our ecosystem partner MediaTek, we launched the 5G RedCap CPE solution that integrates advanced Wi-Fi 7 and Wi-Fi 6 technologies, delivering exceptional data throughput and low latency," said Simon Tao, VP of MBB Product Management Dept. and Head of MBB BU at Fibocom. "These innovations underscore our commitment to driving 5G RedCap monetization, ensuring our customers develop FWA devices based on the latest technologies and capture market opportunities ahead of time."

Visit Fibocom at stand #K1215a at Computex 2024 for more details and 5G RedCap demonstrations.

Fibocom FG332
Nvidia CEO Jensen Huang addressed the company's partnerships with Arm, High-Bandwidth Memory (HBM) manufacturers, cloud service providers (CSPs), and Taiwan's high-tech supply chains, led by TSMC, during a press Q&A session at Computex 2024 in Taipei. He also shared his vision for AI and commented on the competition. The following is a summary of the Q&A session.

Q: Currently, Nvidia's HBM partner is SK Hynix. When will Samsung join as a partner? Rumor has it that Samsung's HBM does not quite meet Nvidia's requirements.

A: For Nvidia, HBM memory is extremely essential. Right now, we're expanding rapidly. We provide the Hopper H200 and H100. Blackwell B100 and B200 are available. Grace Blackwell GB200 is here. The amount of HBM memory that we are ramping up is quite significant. The speed that we need is also very significant. We work with three partners, and all three are excellent: SK Hynix, of course, Micron, and Samsung. All three will give us HBM memory. We are working diligently to qualify and integrate them into our manufacturing processes as quickly as possible.

Q: Could you please elaborate on the delay in Samsung and Micron memory becoming HBM-certified? There was a rumor that Samsung's memory failed your energy consumption and heat verification tests. Can you finally disclose whether you've certified any Samsung memory or HBM?

A: No, it did not fail for any of these reasons. Furthermore, there is no relationship between the material you have read and our business. Our relationship with Samsung and Micron is proceeding smoothly. The engineering is the only task that remains, and it is yet to be completed. Yes, I wanted to finish it by yesterday, but it has not been completed. This requires patience, but there is no story here.

Q: Can you provide an update on your collaboration with Arm?

A: Of course. Our collaboration with Arm is outstanding. We are working on Grace, our CPU designed specifically for AI and high-performance computing, as you are aware. We are rather thrilled about the Arm architecture since it lays the foundation of Grace. It's going to be fantastic for our data centers, and it's going to be an incredible product for the industry. Our relationship with Arm is very strong. We're working closely together to bring new technologies to market, and we're very excited about the future of Arm-based CPUs in our product lineup.

Q: CSPs like Google and Microsoft are developing their own AI chips, which could have an impact on Nvidia. The second question is whether Nvidia would consider entering ASIC chip development itself.

A: Nvidia is quite different. As you know, Nvidia is not an accelerator; Nvidia accelerates computing. Do you know the difference? I explain it every year, but no one understands. The deep learning accelerator cannot handle SQL data. We can. The deep learning accelerator cannot handle photos. The accelerator cannot be used for fluid dynamics simulations. We can. Nvidia promotes faster computing. It's quite versatile. It is also quite effective for deep learning. Does that make sense?

Thus, Nvidia offers more versatile accelerated computing. Consequently, more use correlates with greater utility, and it has a low effective cost. Allow me to explain with an example. Many believe that a smartphone is far more expensive than a phone. In the past, a smartphone was once more expensive than a phone; at US$100, it was the replacement for the music player. It took the place of the camera. It took the place of your laptop.
On occasion. That is correct, isn't it? Therefore, the smartphone was relatively affordable due to its adaptability. It is, in actuality, our most valuable instrument. The same is true of Nvidia accelerated computing.

Second, Nvidia's architecture is incredibly versatile and practical. It is a component of every cloud. It is in GCP, OCI, Azure, and AWS. It's in local clouds and in sovereign clouds. It exists everywhere: on-premises, in private clouds, and everywhere else. Because we have such a large audience, developers come to us first. It makes sense: if you program for or target CUDA, it operates everywhere. If you program for one of the other accelerators, it only operates on that accelerator. This brings us to the second reason our value to the client is quite high: in addition to reducing the CSPs' workload, Nvidia provides them with clients, because those clients are CUDA customers.

Our objective is to migrate consumers to the cloud. As cloud companies expand their Nvidia capacity, their revenue increases. When they expand the capacity of their proprietary ASICs, their expenses rise, and it is possible that the revenue will not rise. We attract consumers to these cloud services, and we are really satisfied with this achievement.

Nvidia is positioned in a distinct manner. Firstly, our versatility stems from the abundance of exceptional software we possess. Secondly, we exist in every cloud and are present in all locations. We are an attractive prospect for developers and highly valuable to CSPs. We attract and generate clients for them.

Q: Could you tell us more about your thoughts on the future of AI and how Nvidia is positioned in this space?

A: AI is the most potent technological force of our era. Nvidia is at the forefront of this transformation, which will revolutionize every industry. We are offering the platforms, systems, and tools necessary to stimulate AI innovation. We are facilitating the next surge of AI advancements by utilizing a variety of software libraries, including CUDA and TensorRT, as well as our AI platforms, including Clara for healthcare, Isaac for robotics, and Drive for autonomous vehicles. Our collaboration with partners from various sectors guarantees that AI can be implemented efficiently and that we will persist in expanding the limits of what is feasible. Nvidia will play a critical role in AI's impact on every aspect of our lives.

Q: A question about building technology. Last week, Intel and AMD unveiled the UALink Consortium. They say that Nvidia's interconnect is proprietary and that being open is much better. What are your thoughts on that? Proprietary technology has been used throughout the history of this industry: Intel has x86, Arm has the Arm architecture, and now Nvidia has NVLink. So, what do you think about open vs. proprietary?

A: For end users, what matters is performance and cost performance. End users will tolerate proprietary technology as long as it offers acceptable performance and cost-effectiveness. Proprietary and open standards have always existed, correct? The market has always been both open and proprietary. Intel is x86, AMD is x86, Arm is the Arm architecture, Nvidia is NVLink, and so on.
The best way to look at it, in my opinion, is in terms of a platform's transparency, its capacity for innovation, and the value it adds to the ecosystem. The most crucial factors are whether it stimulates innovation, adds value to the ecosystem, and opens up chances for everyone, regardless of whether it is proprietary or open. I believe Nvidia has accomplished that. Not only is NVLink an amazing piece of technology, but our industry partnership is even better. Our networking devices, NVSwitch and Quantum-2, are compatible with PCI Express. We collaborate with industry to develop a great deal of open technology, but we also innovate and produce proprietary technologies. It's not either/or; it's all about advancing innovation and adding value to the ecosystem.

Q: How does Nvidia deal with increased competition in the specialized AI chip market?

A: Nvidia is a market maker, not a share taker. Does this make sense? We are always inventing the future. Remember, GeForce was the first graphics card designed for gaming. We played a significant role in the early development of PC gaming. All of our work with accelerated computing was pioneering; we began working on self-driving cars, autonomous vehicles, and robotics over a decade ago and are still working on them, and of course, generative AI.

Nobody could dispute that we were there on the first day, inventing the entire category. Some people claim it is their top priority right now, but it's been our top priority for 15 years. As a result, the company's culture, as well as its personality, revolves around inventing the future. We're dreamers. We are inventors. We are pioneers. We do not mind failing. We do not mind wasting time. We simply want to develop something new. That is our company's personality, so I believe our approaches are really different.

As you are aware, we are not simply building GPUs; the systems shown on stage are only half of the total. All of these mechanisms were designed by us, and we then opened them up to the ecosystem so that everyone could build on them. But someone had to build the first one. We built all of the initial ones, and someone needed to write all of the software that makes this all function. We made everything work. So Nvidia is more than simply a chip company. We are actually an AI supercomputer, AI infrastructure, and AI factory company. We're also quite good at developing AI.

How do you determine what computer to create if you don't comprehend AI? Nobody is going to teach it to you. So, 15 years ago, we had to start learning how to design AI so that we could build these computers to run it. As a result, there are numerous unique aspects to our business. It's difficult to compare us to someone else.

Some argue that the CSPs are competitors because they manufacture chips. Remember, all of our CSPs are Nvidia customers. Nvidia is the only business that provides an accelerated computing platform that is available in every cloud and that is versatile enough to handle everything from deep learning and generative AI to database processing and weather simulation. We're rather a unique company, extremely different.

Q: Regarding Hopper and Blackwell, it appears that there has been a shift in messaging since GTC, with a greater emphasis on value, cost per token, and cost performance. The word value appears to be a frequently mentioned topic. I'm just wondering whether that's a response to customers.
Have they become concerned about pricing, and how do you approach pricing for these sorts of novel chips?

A: Pricing is always value-based. If a product is priced correctly, demand is phenomenal. There is no such thing as great demand at the incorrect price; it does not exist. If you have the appropriate price and provide the correct value, demand will be incredible. Our demand is great. If there were no demand, you could reduce your pricing to nothing and it still would not appear. It's not about the price, and it isn't that you did not get low enough. Pricing is ultimately determined by the value delivered to the market, so I believe our price is appropriate.

The way we set it is not an easy exercise. We have to build the entire system, develop all the software, then break it up into a whole bunch of parts, and we sell it to you as a chip. But in the end, we're really selling the AI infrastructure, and all the software that goes along with it is integrated into your software. What Nvidia builds is AI factories, and we deliver them as chips. Microsoft used to deliver the operating system and the Excel office suite. They wrote the software, but they provided it to you via floppy disk. So the question is: does Microsoft sell floppy disks? No, that is only a delivery vehicle. In many ways, our chips serve as delivery vehicles for the AI factory infrastructure. That's the simplest way to think about it.

But, in the end, we deliver AI factories successfully. In terms of cost, we have reduced energy use by 45,000 times during the past eight years. Moore's Law could not have come close to that. Over the last ten years, we have reduced training costs by roughly 350 times; call it 1,000 or 1,200 times over a full decade. We're only eight years in, so there are two years remaining at the current rate, and there are still many X factors left. Moore's Law cannot accomplish this, not even close, even on its finest days. So we're lowering our energy consumption, and we're driving down the cost of training.

Why are we doing that? So that we can enable the next level of breakthroughs. The reason the world is now training such giant large language models without thinking twice about it is that the marginal cost of training has dropped by a thousand times in ten years. Assume that something you do decreased a thousandfold in cost over the course of a decade. Assume the cost of going from Australia to here is US$3 rather than US$3,000 or more, and instead of 24 hours it takes 20 minutes, maybe even two minutes. If it took two minutes and cost three dollars, I bet you would visit Taiwan frequently. You would come here merely to visit the night market and then return home. Do you understand what I am saying?

By driving down the marginal cost, the energy consumed, and the time, we are enabling generative AI. If we didn't do that, generative AI would still be a decade away. That's why we're doing it: to enable the next breakthrough in AI, because we believe in it.

Q: Are you concerned about the geopolitical risks associated with your investments in Taiwan?

A: We invest in Taiwan because TSMC is fantastic. I mean, not typical. TSMC has incredible advanced technology, an exceptional work atmosphere, and excellent flexibility. Our two businesses have been working together for about a quarter of a century.
We genuinely get each other's rhythm. It's like working with friends: it's almost as if we have to say nothing at all, and we simply understand each other. As a result, we can build extremely complex things in large quantities and at great speed. That is not something just anyone can do, and you can't simply leave it up to someone else to do.

The industry ecosystem here is incredible. The ecosystem surrounding TSMC, both upstream and downstream, is quite rich, and we've been dealing with them for a quarter-century. TSMC and the ecosystem around it include Wistron, Quanta, Foxconn, Wiwynn, Inventec, and Pegatron, and how many more? Asus, MSI, and Gigabyte are all fantastic firms that are sometimes overlooked and undervalued. This is actually the case.

So, if you're from Taiwan, I think you should be really proud of the ecosystem here. If you are a Taiwanese company, you should be very proud of your accomplishments. This is a fantastic place. I am very proud of all of my partners here. I am incredibly grateful for everything they have done for us and the assistance they have provided over the years. And I'm delighted that this is a new beginning, building on all of the experience that these companies have gained over time. All of these wonderful companies have gathered incredible expertise over the last two and a half to three decades.
Targeting the booming industrial control applications driven by the AI wave, global AI PC and AIoT solutions leader MSI is unveiling its next-generation Autonomous Mobile Robot (AMR) solution at COMPUTEX, equipped with the NVIDIA Jetson AGX Orin module. This advanced AMR solution leverages intelligent vision and AI technologies to deliver unparalleled versatility and efficiency to industries such as warehousing, automotive, semiconductor, and panel manufacturing.

Additionally, MSI's Smart 360° AI Patrolling Solution integrates cutting-edge AI image recognition and edge computing technologies to help enterprise clients achieve remote inspection, 24-hour unmanned surveillance, and production line safety protection. This solution targets industrial IoT opportunities in smart factories and smart cities, further diversifying MSI's AI product line and applications.

MSI, dedicated to innovative research and development, was the first in the industry to launch AI PCs and continues to invest in AI application product development and hardware-software integration to meet enterprise needs.

MSI AMR-AI-Cobot Pro Autonomous Mobile Robot Powered by NVIDIA

The MSI AMR-AI-Cobot Pro is equipped with a smart robotic arm designed to enhance productivity and safety across industries. Powered by the NVIDIA Jetson AGX Orin platform, it features precise SLAM-based navigation, efficient 3D route planning, accelerated object detection, and seamless integration with third-party applications. This makes it ideally suited for diverse sectors including semiconductor, panel manufacturing, electronics, warehousing, logistics, textiles, biotechnology, and the food industry.

MSI AMR-AI-Delivery Robot Pro Autonomous Mobile Robot Powered by NVIDIA

Equipped with advanced AI features, the MSI AMR-AI-Delivery Robot Pro includes smart face and speech recognition for safe and accurate deliveries. Powered by the NVIDIA Jetson AGX Orin platform, it excels in diverse environments, boasting SLAM capabilities, 3D navigation routes, and accelerated object detection. Its support for third-party APIs ensures seamless integration with other applications, making it highly versatile for use in factories, warehouses, medical facilities, offices, residential areas, and shopping centers.

MSI autonomous mobile robots

The Smart 360° AI Patrolling Solution uses MSI's intelligent industrial computers, the MS-C910 and MS-C909, as its core computational units. These are combined with 360-degree panoramic video products, solar panels, EV-grade lithium iron phosphate batteries, and eco-friendly paper displays (EFPD). Through advanced AI and edge computing, this solution delivers efficient security monitoring and intelligent disaster prevention alerts.

MSI MS-C910 intelligent industrial computer

For example, in outdoor parking lots, the MS-C910 can serve as a small server system for AI image computation, data center tasks, and transmitting parking management information to an edge box, while the MS-C909 collects and manages solar charge and discharge information and sends it back to the MS-C910. With license plate and facial recognition technology, the system intelligently identifies and displays vehicle entry and exit and parking space status, and provides real-time directions for vehicle retrieval, enhancing operational efficiency and improving overall management integration.

MSI MS-C909

MSI @ COMPUTEX TAIPEI 2024
Date: Tuesday, June 4 - Friday, June 7, 2024
Time: 9:30 - 17:30
Booth: M0806
Venue: 4th Floor, Hall 1, Taipei Nangang Exhibition Center
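MSI has not published the software stack behind these robots, but the NVIDIA Jetson platform they run on supports GPU-accelerated detection through open-source tooling. The sketch below, using the jetson-inference library with an assumed camera source and pretrained model, only illustrates the kind of accelerated object detection pipeline an AMR of this type relies on; it is not MSI's software.

```python
# Generic GPU-accelerated object detection on an NVIDIA Jetson module using the
# open-source jetson-inference library. The camera URI and network name are
# assumptions for illustration; this is not MSI's AMR software stack.
from jetson_inference import detectNet
from jetson_utils import videoSource

net = detectNet("ssd-mobilenet-v2", threshold=0.5)  # pretrained COCO detector
camera = videoSource("csi://0")                     # CSI camera on the carrier board

while camera.IsStreaming():
    img = camera.Capture()
    if img is None:  # capture timeout
        continue
    for d in net.Detect(img):
        print(f"class {d.ClassID} at ({d.Center[0]:.0f}, {d.Center[1]:.0f}), "
              f"confidence {d.Confidence:.2f}")
```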
Allxon today announced the Allxon OOB Cloud Serial Console, marking a milestone as the industry's first such offering tailored for the NVIDIA Jetson family. Allxon OOB Cloud Serial Console enables direct device-level troubleshooting and remote access via the debug UART. Allxon is hosting a live demonstration at COMPUTEX 2024, showcasing this technology alongside AVerMedia. Partnering with AVerMedia, which unveiled its Standard Carrier Board D115W for the NVIDIA Jetson Orin NX and NVIDIA Jetson Orin Nano modules, Allxon is showcasing its power-related features for swift disaster recovery.

Allxon OOB Cloud Serial Console leverages NVIDIA Jetson devices' serial console port through the hardware interface for seamless remote access and troubleshooting. This innovative solution provides unprecedented convenience and top-tier security. Unlike traditional SSH methods, Allxon OOB Cloud Serial Console eliminates the need for cumbersome server setups and fixed ports, significantly boosting security and operational flexibility.

Allxon OOB Cloud Serial Console has received worldwide acclaim during its early access (EA) phase, attracting partners including telecommunications giants, smart security innovators, independent hardware vendors, and leading-edge AI software developers. Allxon is thrilled to announce that general availability (GA) of the OOB Cloud Serial Console is set for the third quarter of 2024.

Allxon will showcase the OOB Cloud Serial Console at COMPUTEX 2024 at booth #L1309a, 4F, Hall 1, Taipei Nangang Exhibition Center (Halls 1 and 2), from June 4 to 7. Learn more about Allxon's platform at https://www.allxon.com/solutions/swiftdr.
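For context, a debug UART console is the same low-level interface an engineer would normally reach with a serial cable; Allxon's service makes that interface reachable through the cloud instead. The sketch below shows what a local session looks like using the open-source pyserial library, assuming a typical port name and baud rate; it is not Allxon's API.

```python
# Local illustration of a debug-UART console session with a Jetson device using
# pyserial. Allxon OOB Cloud Serial Console exposes this same UART remotely via
# the cloud; the port name and baud rate here are typical values, not Allxon settings.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as console:
    console.write(b"\n")            # nudge the login/boot prompt
    banner = console.read(256)      # read whatever console output is pending
    print(banner.decode(errors="replace"))
```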
Chenbro (TWSE: 8210), a pioneer in the design and manufacturing of own-brand rackmount systems, is participating in COMPUTEX Taipei from June 4 to 7. Focusing on the theme of AI, Chenbro is showcasing its latest NVIDIA MGX chassis products and OCP DC-MHS cloud server chassis solutions, seizing more AI business opportunities.

This year, Chenbro is flexing its muscles with its OTS (Off-The-Shelf), ODM/JDM, and OEM Plus service models. In addition to highlighting OTS server chassis solutions for AI, Cloud, Storage, and Edge applications, Chenbro is exhibiting JDM/OEM products co-created with customers, showcasing its strong R&D design capabilities and manufacturing prowess and realizing win-win partnerships.

Aiming for Next-Generation Server Development with Unreleased Enclosure Solutions

Eric Hui, President of Chenbro, highlighted the role of NVIDIA MGX in bringing accelerated computing into any data center with modular server designs. These designs offer multiple form factors, including 1U, 2U, and 4U, enabling diverse configurations of GPUs, CPUs, and DPUs to fulfill various computing requirements. As an NVIDIA partner, Chenbro is showcasing NVIDIA MGX server chassis in 2U and 4U form factors to address enterprise-level AI application needs, and is exhibiting 1U and 2U compute trays to support customers deploying the GB200 NVL72 and NVL36.

Chenbro is also introducing a new generation of cloud server chassis solutions compliant with the OCP DC-MHS standard, and collaborating with Intel on server architecture. Chenbro's DC-MHS enclosure solutions offer Full Width (FLW) and Density Optimized (DNO) specifications in 1U and 2U form factors, supporting E3.S and E1.S storage devices, to meet product development demand for next-generation high-performance servers.

Also at Chenbro's booth is a unique data center display featuring a blend of virtual and physical cabinets, showcasing its Tri-Load high-density storage server chassis solution, which has won both the MUSE and TITAN design awards. Known for its exceptional heat dissipation and load-bearing mechanisms, the Tri-Load series ensures easy maintenance and stability in data center operations. Chenbro is also showcasing Edge AI solutions with short-depth server chassis capable of accommodating GPU deployment, enabling AI computing at the edge.

Collaborating for a Win-Win Partnership

Corona Chen, CEO of Chenbro, underscored the company's commitment to tracking the product roadmaps of tech giants such as NVIDIA, Intel, AMD, and Ampere. By leveraging modular design, Chenbro ensures maximum compatibility and can offer a wide range of server chassis solutions, adhering to the slogan "Whatever's inside, Chenbro outside." Through diverse business service models, Chenbro is actively collaborating with global customers to seize opportunities in the AI and cloud server industry.

This year, along with TechTalk sessions that share innovative product and industry insights, Chenbro is also showcasing joint product demonstrations with motherboard partners such as Gigabyte, MSI, ASRock, Tyan, and Compal, as well as storage device partners like Toshiba, Seagate, and Kingston. In addition, server products created through JDM/OEM collaboration with Hyve Solutions, Wiwynn, Pegatron, MSI, ASRock, and ADLINK are on display.
Lastly, Chenbro will hold a joint VIP night co-hosted with JPC and FSP, showcasing Chenbro's collaborative achievements with customers and partners in win-win partnerships.Amidst the wave of green exhibitions, Chenbro is further showcasing its commitment to sustainability through participation in COMPUTEX ESG GO and the Sustainable Design Award competition. Chenbro applies the principle of 3R (Reuse, Reduce, and Recycle) not only in product design but also in booth design, paving the way for a low-carbon, sustainable future.
At Computex 2024 in Taipei, Taiwan, VIA Labs announced PortSense, a suite of manageability and intelligent connectivity features for USB Hubs that sets a new standard for docking station functionality in business and professional environments.

PortSense is an exclusive VIA Labs hardware and software solution embedded in the latest revisions of VIA Labs hub products, and it enables supported products to retrieve USB descriptor information from connected devices, even without a host system. USB descriptors contain vital details about the connected USB devices, such as the device class and capabilities, the product name and manufacturer information, serial number if present, and much more. Typically, a host system uses this information to identify, configure, and interact with connected devices. However, with PortSense, a managed docking station can collect usage data, perform tasks like pre-configuration and device inventory, and assist in implementing corporate policies.

PortSense is available in VIA Labs VL817 USB 5Gbps Hub and VL822 USB 10Gbps Hub. The VL832 USB4 Endpoint Device has an integrated USB 10Gbps hub and supports PortSense.

While PortSense can function in autonomous mode, its true potential is unlocked when integrated into a connected platform where AI could be applied for analytics and policy control. Hubs with PortSense can communicate with an external controller using a standard I²C interface to share collected USB descriptor information and offer a range of manageability controls, such as enabling or disabling ports, changing connection speeds, resetting devices, and toggling USB battery charging. These controls can be applied on a per-port basis, including the upstream port, providing granular control over each connection. PortSense can be used to collect detailed user activity and enable analysis of usage patterns over time, making it perfect for hot-desking setups or as part of a comprehensive office management solution.

With PortSense, VIA Labs is enhancing the modern workspace with more intelligent and connected solutions. The advanced capabilities of PortSense extend beyond basic data collection and control, providing valuable insights that help businesses manage their USB peripherals more effectively. For instance, IT administrators can use PortSense to maintain an inventory of connected devices, ensuring that only authorized devices are in use. In addition, the AI-Ready features of PortSense support enforcing security policies, such as maintaining an allowlist or blocklist of devices or limiting the use of specific USB device classes in sensitive areas.

By acting as one layer of a comprehensive security strategy, PortSense helps reduce the risk of data breaches and unauthorized data transfers. This approach enhances operational efficiency and security, helping companies maintain a reliable and secure technology infrastructure. VIA Labs has recently released a white paper with more technical details about PortSense, which can be found here: https://www.via-labs.com/pressroom_show.php?id=98

VIA Labs PortSense: an exclusive suite of manageability and intelligent connectivity features for USB Hubs
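To illustrate the kind of descriptor data PortSense gathers inside the hub, the sketch below enumerates the same fields (vendor and product IDs, device class, and string descriptors) from the host side using the open-source pyusb library. It is an illustration only, not VIA Labs' I²C management interface or PortSense itself.

```python
# Host-side illustration of the USB descriptor fields PortSense collects inside
# the hub (vendor/product IDs, device class, and string descriptors), using the
# open-source pyusb library. This is not the VIA Labs I2C management interface.
import usb.core

for dev in usb.core.find(find_all=True):
    try:
        strings = f"{dev.manufacturer} {dev.product} (S/N {dev.serial_number})"
    except (usb.core.USBError, ValueError):
        strings = "<string descriptors unreadable>"
    print(f"VID:PID {dev.idVendor:04x}:{dev.idProduct:04x} "
          f"class 0x{dev.bDeviceClass:02x} {strings}")
```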