Mar 13
AW 2026: a reality check for the fully autonomous era
Global headlines may proclaim a fully autonomous era, but AW 2026 told a different story: robots are not yet fully autonomous, a fact that much of today's robotics coverage obscures. The industry is heading in the right direction, but truly independent operation remains out of reach. A surprising number of robots on the show floor were teleoperated via handheld controllers, and those left unsupervised often struggled with basic navigation of their surroundings.
The automation landscape showcased at Automation World (AW) 2026 marks a profound shift: security and safety are no longer merely compliance metrics, but the foundational layer of intelligence driving the next generation of industrial and urban environments.
A core researcher behind Alibaba's Qwen large language model has left the company and is reportedly joining ByteDance's AI research unit Seed, Chinese media reported. The move underscores intensifying competition for AI talent as China's tech firms accelerate development of next-generation foundation models.
Taiwan's leading automotive power and safety component supplier, Global PMX, has been accelerating its expansion into the fast-growing AI server market while simultaneously advancing into high-value semiconductor and smart medical products. Several new offerings have already entered mass production and shipment, and with additional overseas capacity set to come online, the company is positioning for stronger operational growth ahead.
AI agent technology is gaining momentum in China's tech sector, driven by the open-source platform OpenClaw and a trend known locally as "raising lobsters". The phenomenon is drawing attention from developers, policymakers, and industry leaders.
Taiwan-based power semiconductor packaging and testing firm GEM Services has announced advancements in its copper clip bonding technology to meet growing demand for enhanced cooling solutions in AI servers. As AI servers increase in power density, effective heat dissipation becomes critical, prompting a shift from traditional bottom cooling designs to top and dual-sided cooling products.
Lotes Terminals Industrial Co. said full-year 2025 revenue topped US$1 billion, reflecting stronger demand for next-generation server and cloud infrastructure components as AI server adoption accelerates.
Walrus Pump, Taiwan's leading water pump manufacturer, said liquid cooling for AI servers is growing, and it expects a significant increase in technology pump shipments by the second quarter of 2026. The company also reported renewed demand for residential water pumps amid rising raw material costs and low channel inventories.
The US on March 13 revoked a draft rule that would have required government approval to export US-made AI chips anywhere in the world. The withdrawal reverses one of the Trump administration's most significant chip export strategies, coming after it scrapped a regulation inherited from the Biden administration last year.
At this year's Embedded World (EW) exhibition, most exhibitors focused on critical solutions and specific components, with particular emphasis on two areas: visual sensing, the "eyes," and robotic arm control, the "hands."

Nvidia's annual GTC conference opens March 16 (Pacific Time), with CEO Jensen Huang returning to the nearly 20,000-seat SAP Center in San Jose for a keynote outlining the company's latest advances across the full AI stack. The presentation will cover accelerated computing, AI factories, open models, agentic systems, and physical AI, while signaling the direction of AI infrastructure over the coming year and influencing technology roadmaps across the global semiconductor and server supply chains.

Amazon Web Services (AWS) and AI chip startup Cerebras Systems said they are working together to bring a high-speed AI inference architecture to Amazon Bedrock, a managed service for building generative AI applications. The companies said the system, expected to launch in AWS data centers in the coming months, will combine AWS's in-house AI chips with Cerebras hardware to accelerate the execution of large language models (LLMs).