Tuesday 1 April 2025
Memory supply tightens: Why forecasts matter more than inventory
The memory market is undergoing a significant transformation as suppliers pivot from traditional inventory-based models to forecast-driven production.
Wednesday 19 March 2025
Chenbro showcases latest AI server chassis solutions at Nvidia GTC 2025
Chenbro makes its first appearance at Nvidia GTC 2025, displaying the Nvidia GB300 Grace Blackwell Ultra Superchip compute tray and a full range of AI server chassis solutions based on the Nvidia MGX architecture, including 1U, 2U, and 4U models. At GTC, Chenbro not only highlights its product development expertise but also supports customers in demonstrating next-gen AI servers, promoting mutual growth in the server industry.

Full range of Nvidia MGX products to meet enterprise and large CSP data center needs

Nvidia CEO Jensen Huang unveiled the transformative Nvidia GB300 NVL72 platform at the GTC keynote. At the show, Chenbro is showcasing its latest compute trays, featuring Nvidia GB300 and GB200 NVL72 solutions with accelerated Nvidia ConnectX-8 SuperNIC and Nvidia BlueField-3 DPU networking capabilities, compatible with the NVL72 liquid-cooled rack. The 2U MGX chassis offers flexible configuration and future compatibility for enterprise applications, while the 4U MGX air-cooled and liquid-cooled chassis are tailored to enterprise data center needs.

Chenbro CEO Corona Chen stated that, with strong R&D and manufacturing capabilities, Chenbro has collaborated with Nvidia to offer a complete AI product line based on the Nvidia MGX modular architecture. Through Open Chassis, JDM/ODM, and OEM services, Chenbro provides server chassis solutions ranging from standardized products to fully customized development that meet the needs of system integrators and brand customers, supporting diverse AI, HPC, and big data deployments in data centers and enterprises.

Deepening AI product development and creating a win-win collaboration model

Chenbro made its debut at GTC, led by CEO Corona Chen. Nvidia GTC is an annual AI event that brings together thousands of developers, innovators, and business leaders from around the world to share how AI is driving the next generation of development.
She stated that Chenbro has achieved outstanding results in the AI and cloud server sectors. At this year's GTC, several major tech companies showcased products co-developed with Chenbro, highlighting the industry's collaborative, win-win growth.

Corona further emphasized that Chenbro will continue to increase its investment in AI server development, ensuring that Chenbro will not disappoint the market. By localizing product development teams closer to customers and implementing a global manufacturing localization strategy, including the establishment of the USA NCT plant and a new factory in Malaysia, Chenbro will be able to meet global customer demands more quickly and accurately. Chenbro's operational goals for this year are expected to continue growing, with AI product development as a key driver.
Wednesday 19 March 2025
BIWIN Mini SSD: Innovative storage expansion solution for edge AI era
As edge AI applications proliferate rapidly, intelligent terminal devices such as ultra-thin laptops, tablets, and AI edge computing terminals face exponentially growing demands for storage performance and capacity. However, conventional storage solutions are increasingly revealing inherent shortcomings:

- Traditional SSD expansion requires case disassembly, a cumbersome process necessitating specialized tools and technical expertise, which elevates after-sales service costs.
- While memory card solutions offer portability, CF cards are constrained by performance limitations and struggle to meet the high-speed read/write requirements of AI applications. High-end alternatives like CFexpress deliver comparable speeds but are hindered by limited interface compatibility and cost-effectiveness.
- Embedded storage options (such as eMMC and UFS) lack post-purchase expandability, making it difficult for terminal manufacturers to flexibly respond to users' growing storage needs.

Addressing these challenges, BIWIN introduces the Mini SSD: a compact, modular, high-performance storage solution designed to empower terminal manufacturers to overcome the constraints of traditional storage systems and capitalize on emerging opportunities in the edge AI storage market.

The BIWIN Mini SSD features advanced LGA packaging, minimizing its dimensions to as compact as 15 × 17 × 1.4mm. Credit: Biwin

01 Innovative Design for Seamless Storage Expansion

The BIWIN Mini SSD features advanced LGA packaging, minimizing its dimensions to as compact as 15 × 17 × 1.4mm, approximately 15% of the area of a conventional M.2 2280 SSD, with a significantly thinner profile akin to a mobile SIM card.
This drastic size reduction optimizes internal device space, unlocking greater flexibility for layout design, thermal management, and battery capacity enhancements.

Building on this compact form factor, the Mini SSD introduces a pioneering "standardized slot-based plug-and-play" architecture to the SSD domain. Users can upgrade to terabyte-scale storage with a straightforward three-step process: open the bay, insert the card, and lock it in place. This tool-free physical upgrade approach revolutionizes the storage expansion experience, relieving manufacturers of complex after-sales maintenance and inventory challenges while enabling true "storage capacity customization."

The Mini SSD introduces a "standardized slot-based plug-and-play" architecture to the SSD domain. Credit: Biwin

02 Flagship Performance, High-Speed Data Transfer Empowering AI Applications

Despite its miniaturized design, the Mini SSD delivers uncompromising performance. Equipped with a PCIe 4.0 ×2 high-speed interface, it achieves sequential read and write speeds of up to 3700MB/s and 3400MB/s, respectively, far surpassing typical memory card solutions and rivaling mainstream consumer-grade M.2 SSDs. This capability ensures seamless performance in high-demand scenarios such as AI model loading, 4K/8K video editing, and large-scale design software operation, fully meeting the stringent storage requirements of edge AI terminals.

Flagship performance and high-speed data transfer empowering AI applications. Credit: Biwin

03 Technology-Driven Innovation for Competitive Differentiation

The Mini SSD's core strengths stem from BIWIN's industry-leading expertise in storage solution development, IC design, and advanced packaging and testing processes, underpinned by a robust end-to-end supply chain.
For terminal manufacturers seeking to differentiate their offerings in the AI era and avoid homogenization in a red-ocean market, the Mini SSD presents a compelling pathway:

- Standardized interfaces and modular design help streamline BOM (Bill of Materials) management, reducing production and inventory costs.
- User-upgradeable storage capabilities minimize after-sales service demands, enhancing product lifecycle efficiency.
- Cutting-edge packaging and testing processes ensure stable high-frequency signal transmission and system reliability, elevating overall product quality and competitiveness.
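As a rough sanity check on the footprint claim above, the Mini SSD's stated 15 × 17mm dimensions can be compared against a conventional M.2 2280 card, which by the form-factor naming measures 22 × 80mm. A minimal sketch (the 22 × 80mm figure is a standard assumption, not stated in the article):

```python
# Mini SSD footprint vs a conventional M.2 2280 SSD.
# Mini SSD dimensions (15 x 17mm) are from the article; the M.2 2280
# size (22mm wide, 80mm long) follows from the form-factor name.
mini_w, mini_l = 15, 17        # mm
m2_w, m2_l = 22, 80            # mm

mini_area = mini_w * mini_l    # 255 mm^2
m2_area = m2_w * m2_l          # 1760 mm^2
ratio = mini_area / m2_area

print(f"{ratio:.1%}")          # about 14.5%
```

The result of roughly 14.5% is consistent with the article's "approximately 15% of the area" figure.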
Wednesday 19 March 2025
SK Hynix ships world's first 12-layer HBM4 samples to customers
SK Hynix Inc. (or "the company") announced today that it has shipped samples of 12-layer HBM4, a new ultra-high-performance DRAM for AI, to major customers for the first time in the world.

The samples were delivered ahead of schedule on the strength of the technological edge and production experience with which SK Hynix has led the HBM market, and the company will now begin the certification process with customers. SK Hynix aims to complete preparations for mass production of 12-layer HBM4 products within the second half of the year, strengthening its position in the next-generation AI memory market.

The 12-layer HBM4 provided as samples features the industry's best capacity and speed, which are essential for AI memory products. The product implements bandwidth capable of processing more than 2TB (terabytes) of data per second, a first for the industry. This translates to processing data equivalent to more than 400 full-HD movies (5GB each) in a second, 60 percent faster than the previous generation, HBM3E.

SK Hynix also adopted the Advanced MR-MUF process to achieve a capacity of 36GB, the highest among 12-layer HBM products. The process, whose competitiveness has been proven through successful production of the previous generation, helps prevent chip warpage while maximizing product stability by improving heat dissipation.

Following its achievements as the industry's first provider to mass-produce HBM3 in 2022 and 8- and 12-high HBM3E in 2024, SK Hynix has been leading the AI memory market by developing and supplying HBM products in a timely manner.

"We have enhanced our position as a front-runner in the AI ecosystem following years of consistent efforts to overcome technological challenges in accordance with customer demands," said Justin Kim, President & Head of AI Infra at SK Hynix.
"We are now ready to smoothly proceed with the performance certification and preparatory works for mass production, taking advantage of the experience we have built as the industry's largest HBM provider."
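The article's bandwidth figures are internally consistent and can be checked with simple arithmetic: 2TB/s divided by 5GB per movie gives the quoted 400 movies per second, and "60 percent faster than HBM3E" implies an HBM3E baseline of about 1.25TB/s. A minimal sketch using decimal units (1TB = 1000GB, the convention typically used for bandwidth figures):

```python
TB = 1000                      # GB per TB, decimal convention

hbm4_bw_gbps = 2 * TB          # "more than 2TB of data per second"
movie_gb = 5                   # full-HD movie size assumed in the article

movies_per_sec = hbm4_bw_gbps / movie_gb
print(movies_per_sec)          # 400.0 -> "more than 400 full-HD movies"

# "60 percent faster than HBM3E" implies an HBM3E bandwidth of roughly:
hbm3e_bw_gbps = hbm4_bw_gbps / 1.6
print(hbm3e_bw_gbps)           # 1250.0 GB/s, i.e. about 1.25TB/s
```

Both derived numbers match the relationships stated in the press release.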