Face to face
Interviews with executives in the supply chain
Thursday 14 November 2013
Focus on enterprise SSD: Q&A with OCZ CMO Alex Mei
OCZ Technology shifted its corporate focus away from legacy DRAM memory modules in 2011, and has since built on its expertise in high-speed memory to become a leader in the design and manufacturing of solid state drives (SSDs). The year 2012 was another year of transition for the company, which moved to reduce its reliance on the consumer market and increase its focus on delivering high-performance client and enterprise solid state storage solutions.

Alex Mei, chief marketing officer (CMO) at OCZ, talked about the company's new products and recent achievements, and shared his views on the SSD market, in a recent interview with Digitimes.

Q: How did OCZ and the storage market as a whole perform over the past year?

A: The SSD market matured greatly over the last year as more and more customers across both the client and enterprise spectrum adopted solid state technology for use in everything from ultraslim notebooks to the data center. Unlike many other product segments, the SSD market has continued to grow as storage and data processing continue to drive demand.

Last year was a year of transition for OCZ as the company exited a number of markets that have become highly commoditized; for example, the value SSD market where price is a primary factor. Rather than compete in the highly price-oriented value market, OCZ focused on introducing new differentiated products that leverage our own in-house IP. Examples of this include our Vector and Vertex 450 series drives, which make use of OCZ's proprietary Barefoot 3 controller. The company also continued to push into the enterprise and shifted the mix between client and enterprise products.

Q: What is your direction on the enterprise market? Do you see growth in PCIe?

A: Today SATA continues to represent the largest portion of client and enterprise SSD sales for OCZ, but we continue to see not only opportunity but growth in PCIe-based SSDs. On the enterprise server and data center storage front, PCIe and SAS continue to grow as slots are available, and PCIe delivers the highest performance and density. OCZ not only offers PCIe-based edge cards that are ideal for tiered and primary storage; we have also focused on combining the hardware with software for a complete solution. For example, our ZD-XL SQL Accelerator is an easy-to-deploy, integrated hardware/software storage solution that accelerates and optimizes Microsoft SQL Server database applications in enterprise environments.

Q: What about workstation-class PCIe, like your current RevoDrive series?

A: Client and workstation PCIe adoption also continues to increase as customers look for faster and higher-density solutions that offer improved bandwidth over what the traditional SATA interface and drives can provide. We are finding that the RevoDrive series has become popular among workstation users, enthusiasts looking for the highest transfer rates in high-performance desktops, graphic designers, and customers that put a premium on high-speed processing when creating digital content. Moving forward, you can expect us to continue expanding the RevoDrive series using our own controllers.

Q: Can you talk about the client market and what it means for OCZ?

A: Though we have put a great deal of emphasis on growing our enterprise business, we have also remained committed to the client market. Rather than sell products in the value segment, our product line now focuses primarily on the high-end and enthusiast space where we can differentiate and add value.
Because OCZ has in-house controller and firmware technology, we are able to design drives that provide superior performance and features with the latest NAND. We are always looking for ways to improve performance, reliability and endurance. For example, our flagship Vector line was designed to address both high-performance and workstation applications, so we focused on delivering superior sustained performance. While some competing drives perform great when they are fresh out of the box, that performance degrades quickly once the drives are in a "dirty" state. The Vector delivers consistently fast sustained speeds across the complete spectrum of file types and sizes, including both compressible and incompressible data, for balanced, long-lasting performance, so that customers enjoy a superior overall computing experience over the long term. OCZ will continue to introduce high-performance client drives whenever we can add value for end users.

Q: You launched new drives (Vector and Vertex 450) based on your own in-house controller and firmware. How are these products being received by customers?

A: Very well. Unfortunately, over the past year we have had some NAND supply issues that have impacted our ability to provide some of the higher capacities, but in terms of performance and features, both these award-winning product lines have been adopted by customers in everything from gaming rigs to the latest mobile platforms. The OCZ Barefoot 3 controller and our in-house firmware help make the Vector series among the highest-performing client drives on the market. Because we have this proprietary technology, we are able to leverage the latest NAND types, including 19nm in our upcoming Vector 150 series, which is designed to shatter performance and endurance barriers once again.

Q: Can you tell us more about your enterprise SSD strategy?

A: Our enterprise strategy is to deliver superior value and features to our customers across a wide range of solutions. I say "solutions" because we are selling much more than just enterprise SSD hardware. While we offer traditional SATA, SAS and PCIe enterprise SSDs, we also provide enterprise software under our XL series that addresses everything from virtualization (VXL software) to databases (ZD-XL SQL Accelerator) and enterprise storage central management (StoragePro XL software). Together, our enterprise hardware and software represent a complete solution that enables enterprise customers to get the most out of their flash-based storage. OCZ will continue to develop plug-and-play solutions like our ZD-XL SQL Accelerator that address specific applications, making it easier than ever for storage architects to deploy and start realizing the benefits of enterprise SSDs.

Q: Can you talk about the partnerships OCZ has established over the past year, and how they will contribute to company growth?

A: On the partnership side, OCZ continues to establish strategic partnerships on both the supply and distribution sides of the business. The ability to develop our own controllers has enabled the company to be flash agnostic, allowing us to work with a wider range of NAND providers, which improves availability of high-end flash. We have built up strong partnerships with the fabs that have helped fuel our enterprise data center growth, ensuring high quality and supply. To make our solutions more accessible worldwide, OCZ has partnered with new distributors, such as TechData, that allow us to better support the VAR channel.
All of this has helped OCZ grow our enterprise and client SSD availability and reach.

Q: Can you share your views about the outlook for SSDs? What is your business outlook for 2014?

A: The SSD market will continue to grow in terms of units sold as well as the number of devices that come integrated with this technology. Previous objections in the enterprise, such as capacity and endurance, are becoming less of an issue as controller and firmware advancements help mitigate the issues with die shrinks, which at the same time help reduce cost and improve TCO for customers. At the same time, we realize that as the SSD market continues to mature, it becomes even more critical for us to differentiate our products, both through improved performance and through feature sets.

SATA will continue to make up the majority of the SSD market, but in 2014 we can expect to see rapid growth in the PCIe market, especially in the enterprise. For these reasons OCZ will continue to invest in developing next-generation controllers that provide native support for these key interfaces, and continue to work on improving access to the latest cutting-edge NAND. The SSD market continues to heat up, and this is an exciting time for customers as the technology becomes much more mainstream and improves everything from immersing yourself in the latest game title to accessing cloud-based applications.

OCZ CMO Alex Mei
Photo: Company
Friday 18 October 2013
GKB launches its HD-SDI solution - A one-stop shop for everything; 8 questions to clarify any doubts about "HD-SDI" products
HD-SDI solutions have only been introduced to the market in the past 2-3 years. Most buyers are still not familiar with this new technology, leaving many questions and doubts about what benefits HD-SDI can bring them. The situation is the same as it was for IP solutions five years ago. There is no doubt the HD-SDI market will grow in the coming years thanks to its outstanding performance and features. Below, GKB answers the most frequently asked questions about HD-SDI cameras and HD-SDI DVRs.

Q: What is HD-SDI?

A: HD-SDI stands for High-Definition Serial Digital Interface, a video interface standardized by SMPTE that uses coaxial cable to transport uncompressed digital video. HD-SDI is the upgrade path for traditional analog surveillance. It is simple and easy to learn because it keeps the familiar coaxial cabling and installation practices of analog systems while advancing the image to high-definition resolution with no latency.

Q: What is the standard for true high-definition resolution?

A: After much discussion and divergence that initially prevented a unified HD-SDI solution, the market settled on Full HD 1080p at 30-60fps as the standard HD-SDI resolution.

Q: What are HD-SDI's signal interface and format standards?

A: Please refer to the table below.

SMPTE standard | Name   | Bit rates
SMPTE 259M     | SD-SDI | 270Mbit/s, 360Mbit/s, 143Mbit/s, 177Mbit/s
SMPTE 292M     | HD-SDI | 1.485Gbit/s, 1.485/1.001Gbit/s
SMPTE 424M     | 3G-SDI | 2.970Gbit/s, 2.970/1.001Gbit/s

HD-SDI vs analog

Q: What are the signal sources of HD-SDI and analog, respectively?

A: An analog DVR receives an analog video signal transmitted over coaxial cable. An HD-SDI DVR also receives its signal over coaxial cable, but the signal is digital video rather than analog, which is why it delivers much higher resolution.

Q: What is the maximum supported cable length for analog and HD-SDI?

A: The maximum transmission distance for an analog DVR is 400 meters, while an HD-SDI DVR can reach 120-140 meters, so HD-SDI is at a disadvantage in large-scale deployments.

Q: What type of cable does HD-SDI use?

A: HD-SDI uses standard coaxial cables (the same as analog systems), and we highly recommend that customers use RG-59 or RG-6 cables with 95% braided copper shielding to ensure 3G-SDI transmission is delivered smoothly.

Q: Does HD-SDI, with its uncompressed raw data, need more storage?

A: Because raw, uncompressed data travels from HD-SDI cameras to HD-SDI DVRs, people often assume that HD-SDI solutions require much more storage. This assumption is incorrect: the video is simply compressed in the DVR rather than in the camera. The DVR offers two compression options, H.264 High Profile and Main Profile, with High Profile compression requiring roughly one third less storage than Main Profile.

Q: Does the HD-SDI transmission distance limit the scale of a project?

A: Although the maximum HD-SDI transmission distance is 100-120 meters, HD-SDI accessories (HD-SDI converters and HD-SDI repeaters) address this issue. An HD-SDI repeater inserted along the run extends the transmission distance to 500 meters, which is long enough to meet most customers' requirements.
An SDI-to-fiber-optic converter can reach even further, so there is no need to worry about transmission distance: HD-SDI with accessories can handle projects of any scale.

Q: Is HD-SDI able to deliver images immediately, with no latency?

A: Yes. The major advantages of HD-SDI are zero-compression transmission with no latency.

Q: Does GKB provide a total HD-SDI solution?

A: Yes, the GKB HD-SDI solution is a one-stop shop for everything. Our HD-SDI lineup includes 3G-SDI cameras in a variety of housings and feature sets, SDI DVRs, HD-SDI monitors, converters and repeaters to satisfy all customers' requirements.

Q: Does GKB offer complete compatibility across HD-SDI products?

A: Yes. Our Hybrid DVR is compatible with both analog and HD-SDI cameras, so a GKB HD-SDI DVR can slot seamlessly into a traditional analog system and upgrade it to HD-SDI without re-wiring. GKB also offers CMS management software that manages HD-SDI and analog DVRs simultaneously.

Q: Are GKB HD-SDI solutions compatible with other brands?

A: GKB HD-SDI cameras and DVRs use standard video modes (1080p at 30fps, 720p at 60fps) and are therefore compatible with other brands' HD-SDI products.

Q: Will HD-SDI integrate with IP solutions in the future?

A: The HD-SDI market is still in its beginning stage, so few companies have integrated IP and HD-SDI systems. As a long-term trend, we know it is essential to integrate IP solutions with HD-SDI systems, and GKB will invest time and resources in doing so.

Q: What are the advantages of GKB HD-SDI?

A: Aiming to build safer environments at low cost, GKB is proud to launch its true Full HD-SDI cameras together with a proprietary HD Hybrid DVR series. The advantages of the GKB HD-SDI solution are listed below; with the latest GKB technology, you can enjoy amazingly clear images and smooth transmission.

A one-stop shop for everything:

* A complete GKB HD-SDI camera line
- 3G-SDI cameras supporting multiple SDI modes (1080p/1080i/720p)
- Professional OSD functions: 3DNR, Sense-up, digital zoom and more
- True 120-meter SDI transmission distance
- Long-, middle- and short-range IR bullet cameras
- Outdoor PTZ and zoom cameras

* A complete HD-SDI DVR line with rich functionality
- Stable, high-quality 4/8/16-channel SDI DVRs, seldom seen in the market
- 1080p real-time HD recording and playback
- Panic button, alarm settings, POS support and a free DDNS server

* HD-SDI accessories
- SDI repeaters that supply power to further repeaters
- SDI-to-HDMI converters and optical converters
- SDI monitors with SDI/HDMI/VGA/BNC outputs and Full HD image quality
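To put the storage question above in concrete terms, here is a minimal back-of-the-envelope sketch in Python that estimates DVR storage from an assumed H.264 output bitrate. The bitrates, channel count and retention period are illustrative assumptions, not GKB specifications.

```python
# Rough storage estimate for an HD-SDI DVR recording H.264 streams.
# The bitrates below are illustrative assumptions for 1080p30 footage,
# not figures published by GKB.

ASSUMED_BITRATE_MBPS = {
    "H.264 Main Profile": 8.0,   # assumed average bitrate, Mbit/s
    "H.264 High Profile": 6.0,   # assumed to be noticeably more efficient
}

def storage_gb(bitrate_mbps: float, hours: float, channels: int = 1) -> float:
    """Return storage in gigabytes for continuous recording."""
    bits = bitrate_mbps * 1e6 * hours * 3600 * channels
    return bits / 8 / 1e9  # bits -> bytes -> GB

if __name__ == "__main__":
    for profile, rate in ASSUMED_BITRATE_MBPS.items():
        # 16 channels recorded around the clock for 7 days
        gb = storage_gb(rate, hours=24 * 7, channels=16)
        print(f"{profile}: {gb:,.0f} GB per week for 16 channels")
```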
Friday 28 June 2013
Blue-collar processing: Q&A with Tensilica founder Chris Rowen
Based in Santa Clara, Tensilica has been around the semiconductor industry for around 15 years, providing customers with what it calls configurable dataplane processors (DPUs). The company has more than 200 licensees worldwide and is approaching 2.5 billion cores shipped to the market. In March 2013, the company entered into an agreement to be acquired by EDA provider Cadence for US$380 million.

Just after the purchase, Digitimes had the opportunity to sit down with Chris Rowen, chief technology officer and founder of Tensilica, to talk about the acquisition and what Tensilica brings to the table with its technology.

Q: Very briefly, what is Tensilica's role in the market?

A: What we are able to do is complement standard control CPUs and add significant opportunities for customers to differentiate their products, be more flexible and add value in terms of new algorithms, and we can do that in minutes rather than in months. Our focus is handling computation on the most critical data, whether it is images in a camera, audio in a multimedia device, or wireless communications. If you look at semiconductor companies, a majority of those focusing on smartphones and a great majority of those focusing on digital TVs are likely to be using our technology. As a company we have become one of the major suppliers of processor technology.

Q: How did the interest from Cadence develop?

A: I think it is the combination of two things that has made Tensilica so attractive to Cadence. First, we have been able to develop a very unique processor technology, as I previously mentioned. The second area is our deep architectural and market knowledge in key domains, especially around baseband, audio and imaging. When it comes to system architects and the software ecosystem, we are able to get into the dialog and discussion at a very high and very early level to help customers figure out where they are going with their product lines and what kinds of fundamental technologies they are going to use.

That ability to engage with customers high and early changes the nature of the relationship between vendor and supplier, and when Cadence looks at that, it recognizes that it really wants to change the way it engages with customers as well. It is not just about better tools but about having a seat at the table for key decision making, or being invited to discuss what is going on. So while Tensilica is far smaller than Cadence, a much larger proportion of our activities fit into that strategic discussion about architectures and applications.

Q: Can you talk about technology synergies?

A: Let me explain by giving an example. If you look at an SoC, there are a lot of different things going on. There is a host CPU that runs high-level applications like the operating system and user interface. But it is not terribly efficient, so it is increasingly common for chip designers to implement other kinds of processing or computation to handle things like voice processing, audio, video, vision, baseband and other customized applications. Then there is a need to interface the device to the outside world, whether it is for flash, DDR, analog front ends for different kinds of wireless interfaces, network interfaces, PCI or USB.

What Tensilica brings is a mastery of processor hardware and software and engagement in key applications; Cadence brings a rich portfolio of complementary IP, particularly in interfaces - the analog and digital interfaces that connect with radios, USB, PCI, flash and DRAM.
Those strengths represent all the things that form the boundary of the device, while we are providing more of the guts of the SoC.

Our combined focus has most notably not been on the CPU. We've steered clear of the general purpose CPU market because that is much more about legacy. ARM (in RISC) and Intel (in x86) have taken strong positions in their respective markets and remain the dominant general purpose CPU architectures. The key takeaway, however, is that it is in most of the other areas I mentioned where differentiation takes place.

And it's not just a niche that we've carved, but a broad territory of what you might call blue collar processing - all of the heavy lifting that is at the heart of applications such as imaging, vision, communications, networking, storage, audio and voice - and we are a leading supplier in those areas.

Q: Can you talk about your processor compared with a general processor? Is there a lot of overlap in what they do?

A: MIPS and ARM overlap a lot, but with Tensilica there is much less overlap. I guess if you look at it one way, our underlying technology has a strong element of RISC processor in it, so theoretically you could use Tensilica processors configured as general processors. But that really underplays our capabilities, so we've never particularly emphasized that aspect of our technology. We think it is much more important that our dramatic extensibility and parallelism allow us to do so many things ARM cannot do. It is routine for our high-end processors to do the equivalent of 100, 200 or 300 RISC-equivalent operations per cycle, whereas for ARM it is typically a discussion about whether to do 1, 2, 3 or 4 general purpose RISC operations per cycle.

Now there are some caveats that go with that. While it's true our processors can do more things at a time, and at lower power because they do them in parallel, many applications don't require hundreds of operations at once. But in tasks like imaging, where you can work on all the pixels in the image at the same time, it is possible.

So you always need two ingredients - a processor that is capable of doing things in parallel, and a problem that naturally exposes a high degree of parallelism in the nature of the task itself. We're masters at finding those applications and then coming up with processors that can exploit the available parallelism in the application. That is why I referred to what we do as blue collar: the focus is on applications where heavy lifting is involved (multiple operations at the same time), such as imaging, audio, storage, security and network protocol processing.

That kind of parallelism doesn't really apply when running the sort of general purpose code that ARM focuses on. The applications I just mentioned are much different from what you find inside the code of Angry Birds or the Android operating system (OS), and one neither expects nor needs an ARM processor to run at that level.

Q: Can you walk us through an example of how a customer would decide that it should implement Tensilica technology, say for an imaging application?

A: Let's take the case of a hypothetical customer - a chipset maker targeting smartphones.
Whether it is making chips for its own smartphones or chips for the open market, the same challenges are there - different types of functionality need to be integrated while making sure the product stands out in terms of features, because it is the features that the end customer is going to be most passionate about.

For example, these days camera functions are critically important for smartphones. Many smartphone makers today recognize that superior imaging in quality, resolution and richness of features - whether it is face detection and tracking, high dynamic range (HDR) photos or handling specialized low-light conditions - will help sell the phone.

Moreover, these features are increasingly migrating from still image functions to continuous video. This is really the key divide, because if it is just a still image function, the application may be run on the main CPU, but doing video requires that the processor run at high rates continuously. Now, while there is a fair amount of computing power coming from a lot of the ARM processors on the market, the issue is more of an energy problem. As one leading smartphone company described it to me, a little-known fact in the market is that if you took a quad-core 1.5GHz ARM processor and actually ran all the cores together for any period of time, the phone would overheat in about 20 seconds, maximum. There is a lot of peak computing power, but it can only be used for a sprint.

But image processing, particularly as it moves to video, is not a sprint. It becomes a marathon. So the question is how long and how fast you can go. The gap between the performance you can afford to power on a general purpose processor and the performance you really need for these video functions is a factor of 10x or 20x. You can do a little better with a GPU because it is somewhat more efficient than a general purpose CPU for image processing, but it is still probably 3-5x less efficient, in operations per watt, than an optimized imaging platform.

What you would really like to do is turn off the CPU and GPU and turn on the image processor during those key periods in order to get that high throughput, but you still want to make sure you have the same programming model and the same ease of bringing the applications onboard.

Q: What about hard coding as an alternative?

A: One of the other potential alternatives for imaging is to have it completely hard wired, meaning each function has a different block of IP and a different bucket of gates on your chip addressing it. That has worked reasonably well for applications like the simple standard image signal processing pipeline, which takes a pixel value off an image sensor and massages and improves it in order to get an OK image from the crude raw image that came off the device. That model works because image sensors are fairly similar and what you want to do with the images is fairly similar.

However, with video there has been a great deal of innovation using more sophisticated processing - for example, using temporal and spatial information when capturing a sequence of frames, and using that information from frame to frame to make each frame look better than it would when viewed in isolation. Another application is one where you want to start extracting and processing image content information, whether it is facial features or gestures.
Those examples are not at all suitable for putting into hard wired logic.

Q: The reason being?

A: Even if it can be expressed in hard wiring, you probably don't want to go that way. Doing so would typically mean freezing the definition of the algorithm one to two years before you want that product in the marketplace. So your ability to anticipate what the best possible algorithm is, and what the best and right set of features to deploy in the phone is, becomes extremely limited.

There are some functions governed by standards where you can do that. The H.264 standard has been around for about ten years and it is going to be around for another ten years. It is not a moving target, and people doing completely or partially hard wired implementations have done a pretty good job with H.264.

But when it comes to other functions in gesture, image improvement and other vision applications, these areas are not governed by standards but by competition in the marketplace. Dozens of independent software houses as well as in-house imaging teams at the smartphone companies are constantly competing and coming up with the next new version of the application, so they need a platform that is good for imaging but flexible enough to accommodate all of these different kinds of algorithms.

Chris Rowen, chief technology officer and founder of Tensilica
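As a concrete illustration of the data parallelism Rowen describes - many independent per-pixel operations versus a handful of general-purpose RISC operations per cycle - here is a minimal Python/NumPy sketch of a per-pixel brightness adjustment. The frame size and gain are arbitrary assumptions, and NumPy's vectorization merely stands in for the wide SIMD/VLIW datapath a dedicated imaging core would provide.

```python
import numpy as np

# A per-pixel gain/offset adjustment: every output pixel depends only on the
# corresponding input pixel, so all of them can, in principle, be computed in
# parallel. The frame size and gain below are arbitrary illustrative values.

def adjust_brightness(frame: np.ndarray, gain: float = 1.2, offset: int = 10) -> np.ndarray:
    """Apply y = gain*x + offset to every pixel, clamped to the 8-bit range."""
    out = frame.astype(np.float32) * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # One 1080p luma plane of random pixels stands in for a camera frame.
    frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
    bright = adjust_brightness(frame)
    print(bright.shape, bright.dtype)
```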
Friday 21 June 2013
Reed switches and MEMS: A conversation with Coto Technology
One of the oldest companies in the electronics industry, Coto Technology has been designing and developing small-signal switching solutions for over 90 years. These days, the 93-year-old company is a major player in the automatic test equipment (ATE) industry, where it provides reed relays for testing devices. While reed technology predates the digital age, Coto has also made moves in one of the latest growth areas in the semiconductor industry: MEMS technology. Earlier this year, Coto made a splash when it announced the availability of what it claims is the smallest MEMS-based reed switch on the market today.

During the Globalpress Electronics Summit 2013 in Santa Cruz, Digitimes had the opportunity to sit down with Stephen Day, VP of technology, and Bill Gotschewski, VP of sales and marketing at Coto Technology, to find out more about the company and its ability to develop a MEMS reed switch with a footprint of less than 2.5mm2.

Q: Before we discuss your MEMS switch, can we touch on reed switches? Reed technology (switches and relays) has a certain elegance to it because of its combination of simplicity and historic staying power. Can you tell us a bit about that history?

A: The reed switch was invented in 1936 by a researcher at Bell Labs named WB Elwood. Elwood basically took a piece of glass tubing, put two soft nickel/iron magnetic blades inside and fused them to the tube. He then had nitrogen blown into the tube to provide a clean, inert atmosphere, and the whole thing was hermetically sealed. Precious metals, usually ruthenium or rhodium, are now used on the contacts to make them last longer. They used to use gold, but gold is too sticky.

The way reed switches work is that there is a tiny gap between the two blades, and if you bring the switch close to a magnetic source, such as a magnet or a coil (to make a relay), the two blades are induced as north and south poles and attract, coming together to complete a circuit.

Despite their simplicity, reed switches can switch relatively high power for their size. The movement of the blades is also so far inside their limit of elasticity that they can close literally billions of times, so reed switches have enormous lifetimes. Moreover, reed relays are enormously reliable because they are hermetically sealed, compared with electromechanical relays that are affected by the outside atmosphere. And they are not prone to damage from electrostatic discharge, unlike some solid state switches.

We've made switches tested to five billion mechanical cycles without failure. Now, if you start to flow current through the switch, the switched power will affect the lifespan to some extent. For a 5V 10mA load, the life is about a billion cycles, but that drops to 100 million cycles for a 5V 100mA load.

Q: Historically reed relays were used in telecom, but not so much anymore. What are the main applications for reed switches?

A: We should first explain the difference between a reed switch and a reed relay. A reed switch is a standalone device that can be operated by a magnet, a current-carrying coil, or a combination of both. A reed relay combines a reed switch and a coil into one component.

Reed switches are used in enormous numbers as sensors in areas such as alarm systems and medical devices, among other applications. One of our principal applications has been to wrap a coil around the switch and make it a relay for use in automatic test equipment (ATE) solutions, or anywhere you need to switch a large current with a small current.
They are like a power amplifier in a way.

These days in the ATE industry, each tester has 10,000-20,000 relays inside, and the system may go down if just one relay fails. So the number one objective is reliability. You have to be switching at 500 million to one billion cycles, which requires enormously high reliability in each individual piece - an overall reliability rate of 99.999%. We have really focused on super-high reliability, and over the past 30 years we have dominated the ATE space. This is the area where we have hung our hat, testing anything from Apple iPhones to the next Intel processor, in a range from high precision to high frequency.

Q: The ATE industry is still using glass solutions?

A: The glass solution has lasted from 1940 until now, but the technology is hitting a wall. Over the years, the industry has wanted to get more throughput by including more channels and higher densities in the testers. For example, if Foxconn wants to test more Apple iPhones in a 15-minute period, it will look for smaller and smaller solutions.

Unfortunately, we think fundamental physical limitations are being reached where you can't make a reed switch any smaller. The way reed switches are made, a lot of heat is needed to fuse the glass. If you make the switch too short, the heat travels by thermal conduction down the blade of the switch and destroys the precious metal coating.

In 1940 reed switches were 50mm long; now they are down to about 5mm, and that is about the practical limit. If you include the length of the wire, the device realistically ends up being about 7mm long.

So if a system was 100% reed switches 15 years ago, it is more like 30% today. MOSFETs have kicked in as a replacement, as have electromechanical solutions. But if you ask an engineer what the preferred solution would be, the answer would absolutely be a reed.

Q: Is this what led you to develop a MEMS solution?

A: Based on our industry perspective, we understood that there would be continued strong demand for a magnetically operated reed switch that is much smaller than existing types, that can handle similar electrical switching power, and that can be attached to a circuit board by surface mounting. But it still needed to retain the benefits of reed technology. MEMS was an ideal fit.

So about six years ago we met with a company called HT Micro, a MEMS and microfabrication specialist located in Albuquerque, New Mexico. Management at HT Micro basically all worked at Sandia National Laboratories previously, doing military impact switches and nuclear device detonation switches. Thank god for all of us that was not a big growth market, so they were interested in joining forces with us to develop more mainstream products. That is how we got started. We have since set up a joint venture called RedRock to develop the technology.

HT Micro has its own fab, which is very important for being able to control production. These are not manufacturing processes that are amenable to conventional semiconductor foundries.

Q: Can you talk about the MEMS reed switch you recently announced?

A: What we have done is develop a new type of reed switch based on high-aspect-ratio microfabrication.
The switch maintains the desirable properties of conventional reed switches - high current carrying capability, hermetically sealed contacts, high resistance to electrostatic discharge (ESD) and zero-power operation - in a package about one-tenth the size of the smallest available reed switches.

Instead of using blades, our MEMS reed switch has a metal cantilever that bridges two isolated metal blocks that act as magnetic field amplifiers. There is a small gap between the cantilever and one of the blocks, and when magnetic flux from an external magnet builds up in the gap, it pulls the cantilever into electrical contact with the block. Much like traditional reed switches, the contacts are coated with ruthenium.

Q: You say your switch is the smallest MEMS reed switch on the market. How have you been able to achieve that?

A: We use what is called high aspect ratio microfabrication (HARM) instead of planar MEMS. From our experience, most switch users are much more concerned about the footprint of the switch (PCB real estate) than they are about height. In traditional planar MEMS, the blade is electroplated on top of a base substrate, and then a layer under most of the blade is etched away, freeing the blade so it can bend. But making thin, wide blades the planar MEMS way with conventional electroplating is difficult, and if you try to maximize the cross-sectional area of the blades by plating them wider, the footprint increases.

Using HARM, the blades are still grown by electroplating, but they are grown edge-on, vertically relative to the switch substrate. That way, we can make them as tall as we want without increasing the footprint of the switch.

Another thing about HARM is that it produces switch structures that generate much greater closure force than previous MEMS-based magnetic switches. This enables hot switching up to several hundred milliwatts. The high retract force when the switch opens also prevents it from sticking shut during hot switching or after long closure periods.

This is important because while some customers are looking for a switch to perform hundreds of millions of cycles, others need the switch to sit for almost two years and then be used once. That is very important, for example, in applications used in the medical industry.

Q: Your products are not priced to target the mass market, such as smartphones. What are some other possible applications? Is the target market the ATE industry?

A: The MEMS reed switch can be used anywhere you need higher power in a small space, because the switch dissipates power very efficiently. In areas such as robotics and sensor applications, MEMS reed switches are ideal as actuators. The device is also ideal where low-power or no-power activation is required - for example, in battery-sensitive applications like hearing aids. A lot of 70-year-old guys don't want to always be replacing the battery in their hearing aids.

In the ATE industry, our focus will be on a MEMS reed relay, which is being developed in parallel with our MEMS reed switch. This product will come in the future.

Stephen Day, VP of technology, Coto Technology
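To make the cycle-life figures quoted above more tangible, here is a small Python sketch that converts rated cycles into service time at an assumed usage rate. The cycle ratings are the ones given in the interview; the switching rate and duty hours are illustrative assumptions about how hard an ATE channel might be driven, not Coto figures.

```python
# Convert rated reed-switch cycle life into service time at an assumed usage rate.
# Cycle ratings come from the interview; the operations-per-second rate and duty
# hours are illustrative assumptions, not Coto specifications.

RATED_CYCLES = {
    "mechanical (no load)": 5_000_000_000,
    "5V / 10mA load": 1_000_000_000,
    "5V / 100mA load": 100_000_000,
}

OPS_PER_SECOND = 50          # assumed switching rate for one relay channel
DUTY_HOURS_PER_DAY = 16      # assumed tester utilization

def service_years(cycles: int) -> float:
    """Years of service before the rated cycle count is reached."""
    ops_per_year = OPS_PER_SECOND * 3600 * DUTY_HOURS_PER_DAY * 365
    return cycles / ops_per_year

if __name__ == "__main__":
    for condition, cycles in RATED_CYCLES.items():
        print(f"{condition}: about {service_years(cycles):.1f} years of service")
```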
Friday 14 June 2013
Lose the switch, lose the loss: Cavendish Kinetics leverages MEMS for tunable RF components
Cavendish Kinetics recently announced the availability of production samples of its tunable RF capacitors to key strategic partners. Shipped as a chip scale package (CSP), the Cavendish digital variable capacitor (DVC) technology is used to tune antennas, power amplifiers and filters to improve RF connection quality and signal strength. Cavendish leverages MEMS technology to manufacture the high-performance, tunable RF components.

During the Globalpress Electronics Summit 2013 in Santa Cruz earlier this spring, Digitimes had the opportunity to chat with Dennis Yost, president and CEO, and Larry Morrell, executive vice president of marketing and business development, at Cavendish about the issues the company is addressing, the technology it is looking to bring to market, and the value proposition of its MEMS solution.

Q: What is the issue in the market that you are trying to address?

A: The mobile handset market has continued to move forward, progressively going from 3G to 3.5G and now to 4G. As the technology progresses to higher platforms, one challenge for system designers is to find ways to improve connectivity, especially for transmitting data very quickly, because we all want to access more data, watch more video and do more things on the Internet. The thing is, the modulation schemes to do that require a higher signal-to-noise ratio (SNR) than a plain voice call. So as data becomes more important, the quality of the radios becomes more important.

Unfortunately, the radio part of the phone is becoming less and less efficient compared with what the modulation scheme should be able to give. If you look at 4G, you should be able to transmit 80Mbps, but users actually see only 10% of that on a good day. Moreover, users at the cell edge (between cells) see even worse performance than that.

Our focus is on improving the antenna from transmitting energy at 5-10%, maybe 13%, efficiency to being 30-40% or even 50% efficient.

The result of this improvement is that you can save power on the transmit side and improve sensitivity on the receive side, so users have a better experience. Battery life can also be extended, because users don't have to transmit as often at full power and the power amps do not heat up as much.

Q: Can you explain in more detail some of the issues facing front-end module design and antenna design with the transition to LTE?

A: In the 3G and 3.5G markets, you traditionally have been able to get a pretty decent world phone that covers frequencies from 800MHz to about 2.2-3GHz. That is a pretty good phone for 3G, and for that, the antennas used were just good enough.

With 4G, frequencies are being added at both ends of the spectrum, so basically the frequency range you have to cover expands to 700MHz to 2.7GHz. Now add to that the white space that is becoming available - the digital dividend that comes from moving terrestrial TV from analog to digital. The US and Europe are talking about adding the 600MHz bands, which is going to make things even more difficult. The bands that have already been approved by 3GPP now span from 698MHz to 3.5GHz. And while there is no 3.5GHz deployed, there are companies out there seeing if they can make that work.
So there is much more spectrum that the antennas need to cover.

Moreover, antenna makers are not consulted when new phones are designed, and there really is no interest in doing them any favors when it comes to improving RF design. In fact, the exact opposite is being done. Consumers don't want an antenna sticking out of their phone and nobody wants a small screen, so antennas are becoming smaller and are required to deal with more noise. OEMs also sometimes simply stick a connector right in the middle of the antenna, or add speakers or buttons that interfere with the workings of the antenna. Antenna makers are then given impossible specs to meet and are expected to deliver in a short time anyway.

The antenna makers are the tail end of the dog, and they would be more than happy to change the way they approach the problem.

Q: Aside from the difficulties of achieving optimal RF design in mobile handsets, it seems you are arguing that there is a problem with tuning RF signals in general. Why does this occur, and how does your technology address the issue compared with what is currently used in the market?

A: If you want to tune an RF signal, one way to do it - and people have been doing it this way in different forms for a number of years - is to have a multi-throw switch attached to different values of load. Imagine a one-pole, 32-throw switch with each of those 32 switch elements attached to a different RF load, be it an inductor or a capacitor of a different value. So you have the power loss of the switch and the loss of whatever the passive component is, but you get very good tuning capability out of that.

Unfortunately, the switch consumes some of the RF signal by virtue of the fact that it has resistance in it. The power loss of the switch frequently eats up all of the efficiency gains you can achieve elsewhere, because the switch itself has 1 ohm or 1.5 ohms of resistance. That may sound like a nice low value for a switch, but if you lose an ohm in the switch, you lose 3dB overall, meaning about half your signal is going out the front door. So, 1 ohm of resistance in your switch is basically a killer. And that is what handset makers have to live with.

Our solution is to take the switch and throw it away. Our device allows the RF signal to connect directly across a shunt capacitor, which is one of the ways you can implement a load. And if you have a capacitor whose value you can change, as opposed to needing a switch, the losses of the switch disappear. Lose the switch, lose the loss.

Q: How does it work?

A: We use MEMS technology for RF. We make a movable component in our technology. Imagine a parallel plate capacitor: as the plates move closer together they have high capacitance, and as they move apart they have low capacitance. It is kind of a bi-stable capacitor.

MEMS is ideal because you eliminate all the non-value-added parasitics. If you had a switch in series - just switching capacitors in and out - the resistance loss in that switch is a parasitic you have to live with. With our component you don't have that. It is just a capacitor that changes its capacitance state. So there is no parasitic, which in this case is called equivalent series resistance (ESR).

Q: You compared your platform with only one example of a switching solution, but there are other companies addressing this market. How do you compare with them?

A: Anyone who is doing switches can address this market, and there are a number of different ways it can be done.
One solution is to use discrete switches, such as gallium arsenide (GaAs), together with discrete components. You can also integrate these solutions using solid state switches. You can build a relatively good solid state switch with SOI technology, and some companies are doing that quite successfully, addressing the switch market as a stand-alone market.

However, you have the same issue in all of these cases: if you have a switch in there, even with a very efficient capacitor, you still have the loss related to the switch. There are a number of companies in the switch market trying to do the same kind of application. Unfortunately they always have a switch, because architecturally they can't get rid of it. The reason is that their capacitors are all fixed-plate capacitors. They don't vary the way a MEMS capacitor, which is intrinsically a variable capacitor, can.

Q: You didn't mention any MEMS competitor in your comparison. Are you the only company addressing this issue with a MEMS solution?

A: Companies have been trying to implement this solution in MEMS for quite a while, because MEMS gives the best performance. This idea is nothing new; research has been going on for probably around 30 years now. The problem and challenge people have had with MEMS is whether you can make it reliable, make it in volume, and make it at a cost and size that makes it a viable solution for a cell phone maker.

Fortunately, we can meet all those requirements. Other companies may talk about using MEMS in RF solutions, but with them you are talking about US$5 and US$10 parts. There is not enough BOM in a cell phone for a US$5 switch.

Q: So what is the pricing of your MEMS solution?

A: Let's just say that if you look at current designs in the market for LTE, one solution is to run multiple antennas connected with switches. If you eliminate those antennas and eliminate the switches, you can save a dollar or more with our solution.

Q: Application processor companies such as Qualcomm have also announced solutions that improve the performance of the RF front end. How do their solutions differ from your approach?

A: Companies that have access to what we call the interior of the radio - the other side of the antenna, where you are talking about the switches and PAs and so forth - use something called an impedance matcher. The impedance matcher is designed to convert the 50 ohms that the RF uses inside the phone to the free space that the antenna sees. So you have this conversion zone, and that conversion is done by an impedance matcher, which can be re-tuned as you change frequencies so the antenna works as well as it was originally designed to at multiple frequencies.

Now, while that is all possible to do, you unfortunately haven't changed the efficiency of the antenna by doing it. You have simply made it work the way it was designed to work - across multiple frequencies. This can deliver a performance improvement of 10-20%, or even 30% in some extreme cases.

However, since it is in the signal path, the losses from the switch remain, so you still have to recover those losses, which would be in the 1-1.5dB range. The gains therefore have to exceed that to show a net gain, which is turning out to be extremely difficult to demonstrate.

This type of solution can be done with a variety of different architectures, but they require multiple components and end up being much more complex circuits to control, because you need several variable elements which have to be traded off against each other.
There are literally hundreds of thousands of combinations that have to be evaluated. It is a very complex design task with marginal results.

Our belief is that if you have a lossless component on the antenna, you should let the antenna itself do the job of becoming more efficient. That actually improves all your cases. You can add an impedance matcher on top of that if you want, but then you are still working on the wrong end of the problem.

Q: What is the current status of your MEMS device?

A: We just announced the availability of production samples. Before that, we were making sure it was a highly efficient design and that it had a high Q factor (a higher Q indicates a lower rate of energy loss). Our measurements with antenna makers show that our Q in actual usage conditions, over the normal usable range of the device, is in excess of 200. This compares with a Q of 40-50 for devices that use switches, and that represents a good number for them. Those losses are just being tossed out the front door. That is why we are able to improve antenna efficiency by a factor of two or more.

Q: How has the response been so far from potential customers?

A: The response to the technology has been overwhelmingly positive. The way we demonstrated our technology was by buying commercially available phones and working with antenna companies to retrofit the devices. We didn't pick any particular method but let them choose their own style of implementation. We provided them with some early parts and they reported an improvement of 1-2dB, and in some cases 3dB, over existing antennas that were already in production. This was done without the benefit of going back and re-tuning the industrial design. It was a very quick and dirty retrofit.

These results were also well received by the handset makers, who are now waiting for us to come back when we are in full volume production. That is the process we are in right now. We expect that later this year we will be announcing design wins and big vendors adopting the technology.

Q: It is not always easy for startups to receive funding, despite any amount of "Wow!" their technology may have. As a semiconductor startup, are you finding it difficult or easy to find funding?

A: Finding investment for semiconductor startups over the past few years has really been a challenge. There is less and less money available for the traditional startup - those with a business model of designing a better CMOS chip than everyone else and going to a foundry to build it. The investment community has been looking for startups that own a unique technology platform for offering differentiated products. Fortunately for us, this is what we are able to do.

We have been focused on the RF component market since late 2008. For the first couple of years we were a technology development company. Now that we have the technology, we are focusing on going to market. From a business stance, this has made us very attractive to investors. It has taken us more than four years to get where we are now, the main reason being that the technology barriers are so high, so it is not easy to copy. Our investors are extremely pleased with this direction, and also that we targeted a mobile handset market that is big and still growing. Billions of these devices are going out the door, so the market is pretty large for us.

Cavendish Kinetics: Dennis Yost (left), president and CEO; and Larry Morrell, executive vice president
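For readers who want to connect the Q-factor numbers above to loss, a minimal Python sketch of the standard capacitor relation Q = 1/(2*pi*f*C*ESR) is shown below. The 900MHz frequency and 2pF capacitance are arbitrary illustrative assumptions, not Cavendish device parameters; the two Q values are the figures quoted in the interview.

```python
import math

# Equivalent series resistance (ESR) implied by a capacitor's Q factor,
# using the standard relation Q = 1 / (2*pi*f*C*ESR).
# Frequency and capacitance are arbitrary illustrative values,
# not Cavendish device parameters.

FREQ_HZ = 900e6   # assumed operating frequency
CAP_F = 2e-12     # assumed tuning capacitance (2 pF)

def esr_from_q(q: float, freq_hz: float = FREQ_HZ, cap_f: float = CAP_F) -> float:
    """Return the equivalent series resistance in ohms implied by a given Q."""
    return 1.0 / (2 * math.pi * freq_hz * cap_f * q)

if __name__ == "__main__":
    for q in (50, 200):   # switch-based tuner vs. the MEMS DVC figure quoted
        print(f"Q = {q:3d} -> ESR ~ {esr_from_q(q):.2f} ohm at 900 MHz, 2 pF")
```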
Thursday 13 June 2013
Ethernet access devices, where the enterprise meets the carrier: Q&A with Vitesse marketing director Uday Mudoi
Eight years ago, the Metro Ethernet Forum (MEF) defined the first carrier-class networks and services for Ethernet, specifying attributes such as quality of service, service management, reliability and scalability. Thus Carrier Ethernet was born. This marked the first time Ethernet services were standardized, and it fueled a transformation in the telecom industry, with Carrier Ethernet replacing SONET/TDM as the service of choice for carriers, triggering adoption in 100 countries and building a market of US$40 billion in revenues, according to the MEF.

However, while first-generation Carrier Ethernet enabled standardized Ethernet services delivered over a single provider's network, Carrier Ethernet 2.0 (CE2.0), introduced in 2012, can deliver multiple classes of service (multi-CoS) over interconnected managed networks worldwide. During the Globalpress Electronics Summit 2013, and again at Computex Taipei 2013, Digitimes had the opportunity to talk with Uday Mudoi, product marketing director at Vitesse, about Carrier Ethernet and the boom in the Ethernet access market.

Q: Can you tell us what have been some of the main market drivers in the transition from SONET/TDM to Carrier Ethernet?

A: Some of the main reasons players in the market looked to Ethernet were that it was more cost effective and it made increasing bandwidth easier for the carriers. I remember one carrier telling me that back in the day, when a T1 subscriber (on SONET/TDM) complained about pricing, the carrier would have to engage with that situation because there was not much else it could do. After the transition to Ethernet, if a customer complained, the carrier could easily increase the bandwidth a little and the customer would stop complaining about pricing. The point of this example is that Ethernet is not only cheaper for the carriers; it is also easier for them to scale.

In terms of market growth, in 2007 Ethernet service revenues were a US$7 billion business for carriers, and demand is expected to grow to US$48 billion by 2015. It is one of the highest growth areas for the carriers in terms of revenues.

Moreover, carriers have found that customers are willing to pay more for a differentiated service. For example, if a carrier has a service level agreement with a financial firm, the odds are that the customer would be willing to pay more for certain guarantees, such as knowing its data will be secure. So many of the issues related to enabling revenue growth were addressed with MEF CE2.0.

Q: MEF CE2.0 addresses interoperability between carriers and support for multiple classes of service, but these areas are not something new for carriers. What is the significance of MEF CE2.0?

A: It is the standardization that is important. Companies are becoming increasingly international and need global services to support people accessing the network worldwide. No one carrier covers the globe completely, so interoperability and a common understanding between the various providers - who partner up to provide global services - are required.

Let's look again at the financial firm example: that company's revenues will probably depend on how fast it can access and exchange data. So if it pays for a service that guarantees certain access and certain latency, there needs to be an underlying understanding between the service provider (and its partners) and the customer about what each is putting in and getting out of it.
Standardized services also make it possible to create service tiers. Standardized services across carriers and customers are extremely important for allowing everyone involved to have the same understanding of what subscribers are supposed to get from providers. This interoperability is required if you want to deliver global services.

In addition, there are opex issues addressed by CE2.0, such as service activation and how performance measurements can be done remotely in the field.

Q: Another basic assumption is that carriers want complete management down to the customer premises. This is done through Ethernet access devices (EADs), or demarcation devices. According to Infonetics, the EAD market is expected to grow 81% between 2012 and 2016. Can you talk a little about the role of these devices in Carrier Ethernet and how Vitesse is helping enable this market?

A: When a carrier delivers a service to a firm, it will place a box next to the customer premises equipment, and this box, or Ethernet access device, is used by the carrier to guarantee a certain performance and to manage the service remotely. It is the demarcation point between the firm's network and the carrier's network.

In terms of features, these boxes need to be cost sensitive and scalable. Maybe today you are servicing 10 users but in the future it will be 100, or maybe you have 1Gbps of bandwidth today and you want 10Gbps tomorrow. Carriers don't want to have to change the hardware every time a service changes.

Most importantly, these boxes need to support MEF CE2.0 so the carrier can deliver Carrier Ethernet services according to the customer's expectations.

Vitesse comes into the picture because we saw that there was no silicon solution available that understands and supports MEF CE2.0. So we designed specific Ethernet switching solutions with a built-in layer in silicon that allows the box to be configured, managed and scaled in compliance with CE2.0 standards. In terms of products, we have what is called Vitesse Service Aware Architecture (ViSAA) technology for our portfolio of Carrier Ethernet switch engines. ViSAA is integrated into the Ethernet switching layer silicon to provide a scalable, hardware-based solution for enabling MEF CE2.0 Carrier Ethernet services in the EAD. And because CE2.0 is in the hardware, the solution is low cost and low power.

Q: So you are enabling MEF CE2.0 in hardware. How was it done previously?

A: Previously, this was done in software because there was no silicon solution available. Most EADs were built around FPGAs.

The biggest difference is performance. Software performance doesn't scale: if you want to add more users in software, the device may lack the processing power to handle it. In addition, there was no standard on the hardware side, and the development cycle was longer.

With our devices, we can guarantee our OEM customers that they can go to market in six months and get MEF CE2.0 certification as well.

Q: What is driving the EAD market?

A: One thing is cloud computing. It is very obvious that computing within the cloud is increasing and that this is driving bandwidth. But it is important to remember there are two parts to the cloud. One part is the computing itself; the other part is the access. What is interesting from our perspective is access to the cloud. How does the cloud connect to the WAN, or to another cloud somewhere else?

With cloud computing, if the network goes down, the entire company could go down.
So security, reliability and network performance are mission critical, and some enterprises are holding off on using the cloud for some services until they know it is secure. With our ViSAA technology, EADs can provision, manage and allocate resources to tailored services while guaranteeing reliability and remote management capabilities. And with our Intellisec technology, we can deliver network-level security at a low cost, while maintaining performance.

Another area driving growth in EADs is LTE. If network capacity increases 10x because LTE is enabled, that 10x of bandwidth needs to be managed in the access market as well.

Q: You are currently providing a reference design for EADs. What kind of opportunities are there for Taiwan ODMs? Many of them are involved in networking and perform well when provided with a reference design.

A: Vitesse provides the chip, design and software. But it is not as simple as putting it together and just manufacturing the box. The box sits in different environments. Carriers need reliable products, a long product lifecycle and outdoor protection. In addition, the system is a lot more complex and requires a much longer lifetime compared to consumer and enterprise solutions. So there are design, certification and validation requirements before a carrier accepts a box.

Currently in Taiwan we work with an IPC firm called Rubytech, a publicly listed company. The company's solutions are based on a Vitesse design and then customized, including design, manufacturing and add-on services. The company has been working with Vitesse for a number of years and provides ODM manufacturing for various telecom equipment suppliers.

However, although the telecom equipment market is dominated by just a few firms, there are opportunities for more manufacturers in the future. Carriers will look to the likes of Alcatel-Lucent when it comes to core equipment, but the access market is much more fragmented. There are six companies from Israel alone that I know of involved in the access market, and in emerging markets like Russia or India, the local carriers may want to support local players when it comes to providing access devices. This is where our reference design can help Taiwan ODMs in the long run. But they need to be committed to the market.

Uday Mudoi, product marketing director at Vitesse
Friday 7 June 2013
Lifestyle and consumer electronics: Gajah offers total solutions ranging from hardware to e-content management
Singapore-based Gajah International Pte Ltd (Gajah) has won several Computex 2013 Design and Innovation Awards for a host of products, including its innovative InkCase, an e-paper-based smartphone case that doubles as a second screen. According to Gajah CEO Yong Guan Jer, the company is a total solution provider that engages in the design and development of OEM/ODM consumer electronics products, as well as a comprehensive source for e-content management and delivery systems.

Q: Please tell us more about your exhibits at Computex Taipei 2013, particularly the InkCase, which we understand is more than just a phone case. What's the concept behind such a product?

A: The concept started with a friend who often prints his baby's photos on his iPhone case. As you know, babies grow quite fast, which is why he prints a new phone case almost every two weeks. Our internal Innovative R&D Team was developing a Bluetooth EPD unit that was initially meant for conference room signage, such as nameplates, and commercial signage. And we thought it might be a good idea to apply our research results to a phone case to display images of your loved ones.

After that, we started brainstorming and came up with more ideas for the second-screen InkCase. We asked ourselves what the problems would be when a consumer had only a single screen on a smartphone, which is now a multi-function device. We found that users get quite frustrated when they are watching YouTube on their phone and an SMS or WhatsApp message suddenly comes in. They have to pause and switch to the SMS app or WhatsApp to read the message and then go back to reload their YouTube video, which again takes some time. We also found there are a lot of useful functions that a second screen can provide, while still using less power and therefore increasing the battery life of power-hungry smartphones.

Q: There are many OEMs/ODMs in the consumer electronics market. Where does Gajah's competitiveness lie? What services do you provide to your customers, besides hardware design?

A: Cost effectiveness, an innovative, world-class design team, and in-depth understanding of the industry. Especially for e-book readers, we have most of the codecs, DRM systems and content delivery systems - a complete solution from hardware and software to Web engines, Android and iOS applications - all catering to our partners' needs. We are not just an ordinary OEM/ODM company; we provide complete solutions, or I should say, a complete ecosystem, to our partners. A lot of our customers say Gajah always provides innovative and design-oriented products and complete solutions with affordable and acceptable pricing.

We have a team of engineers focusing on application development for Android and iOS. We develop unique applications and Web server engines in line with our product ranges to help our partners differentiate themselves and stand out in the market.

Q: Gajah has been developing e-book readers, tablets and other accessories. While tablets are all the rage at the moment, the outlook for some of these other product areas is not so promising. Can you tell us your view on the prospects of products such as e-book readers in the face of competition from smartphones?
Is Gajah also working on smartphones?

A: We have six different business units: Mobile Media Products (MMP), which focuses on tablets and other mobile media products; Communication and Audio Devices (CAD), which mainly focuses on portable audio and conference devices; Home Connected Devices (HCD), which develops devices that connect the home to the Internet, such as TV boxes; Mobile Lifestyle Products (MLP), which develops lifestyle accessories for mobile phones; Specialized Mobile Media Products (SMMP), which focuses on e-book readers and educational projects; and Interactive Digital Media (IDM), which focuses on application development, servers and Web engines.

We design a lot of unique tablets that have won several design awards. We understand that there are many trendy electronic devices in the market and we need unique things in order to differentiate our products from the competition and stand out. We are focusing on lifestyle designs and most of our products are stylish companions for consumers. For example, our Gold Award-winning TV Box is not a conventional brick-sized box that one would want to hide in a drawer; it is a stylish item that blends into your living room.

The e-book reader market has been growing slowly yet steadily. It remains quite popular due to the characteristics of the EPD panel, which provides comfortable reading and makes e-book readers more suitable for reading than tablets or smartphones.

We no longer do MP3 players, because MP3, like GPS, has become just a function or application that is incorporated into smartphones. But we are not developing mobile phones, as we don't have some of the cutting-edge technologies in the smartphone sector with which we could compete against Samsung or Apple.

Q: Who are your customers? Where are they? China seems to be an important market for Gajah, which has offices in Hong Kong and China. What are your plans for expanding your presence in the China market? Where else are you looking to expand?

A: Our customers mainly come from the US and Europe. They are importers, local brand owners and retail chain stores. China is quite important for us, as the market is booming and demand for innovative products there is growing stronger and stronger. More consumers are looking for better products and the China market is more open to innovative IT products. It seems that many products are now launched in China first before entering other markets. We hope our partners in China can assist us in penetrating the China market, while our OEM/ODM businesses are focusing on expanding our reach to more partners in the US and Europe, as well as Latin America.

Q: Where is manufacturing done? Does Gajah run its own manufacturing facilities or outsource to others?

A: Our production is done in China. We design and handle the whole manufacturing process, workflow, testing and component supply chain, and then outsource the assembly process to contract manufacturers. We run through all the quality testing protocols and quality verification processes, which include reliability tests, component stress tests and others. In product development, the core competencies are research and development, as well as quality testing and verification. That is why we focus a lot on this development process and outsource the assembly process to our contract manufacturers.

Q: Many Taiwan-based manufacturers have been talking about moving manufacturing to Southeast Asia from China, where labor costs are rising fast.
As a Singapore-based company, can you give us some insights into the pros and cons of manufacturing in Southeast Asia? How is the IT manufacturing environment in Southeast Asia?

A: Supply chain issues are still the key challenge in SEA. The IT business is a fast-turnaround business, and most of the key suppliers are setting up their operations in Hong Kong, Shenzhen and Dongguan in China. We used to have our operational team in Singapore and tried to run the manufacturing process in SEA, but the supply chain was the main headache, as lead times would be much longer.

IT manufacturing in SEA is quite popular for some more stable sectors and high-precision products, such as hard drives, servers, medical equipment and other industrial products. Fast-turnaround consumer electronics need a seamless supply chain to cater for the fast changes. By running manufacturing in SEA, you could enjoy much higher-quality output, as the area has a good track record in high-precision engineering. Before China opened its doors, SEA used to be a manufacturing base for the US and Europe. However, unless the supply chain can accommodate more rapid changes, I think it will remain quite a headache to produce high-mix, high-volume products in SEA.

Gajah International CEO Yong Guan Jer
Thursday 6 June 2013
Enabling IoT through Wi-Fi and Bluetooth: Q&A with Broadcom marketing director Jeff Baer
Just before Computex 2013, Digitimes spoke with Jeff Baer, marketing director for Embedded Wireless, Wireless Connectivity Combo at Broadcom, to find out more about Broadcom's push into the embedded space with Wi-Fi and Bluetooth.

Q: Lately there has been buzz about Broadcom leveraging its wireless technology in the embedded space. Can you tell us about the progress Broadcom has made in this area?

A: To start off, the Broadcom embedded business is kind of on the opposite end of the portfolio spectrum from our consumer business when it comes to technology and business model. In the consumer world, the focus of the business is on huge OEMs and huge ODMs, which work with a few well-defined applications. The embedded space that we are targeting, whether it is called the Internet of Things (IoT) or machine-to-machine (M2M) communication, is more of a horizontal type of business, where all types of electronic devices will eventually be connected wirelessly.

Q: How is Broadcom enabling customers?

A: I represent a product family called WICED (Wireless Internet Connectivity for Embedded Devices). WICED (pronounced "wicked") is a development system that vastly reduces the effort required to add wireless connectivity (mainly Wi-Fi and Bluetooth) to embedded devices. We launched the first WICED product about a year ago focusing on Wi-Fi, and since then we have made more announcements, adding more legs to the WICED stool, so to speak. For example, we recently announced the availability of our Smart Development Kit with a Bluetooth Smart system-on-a-chip (SoC), allowing for more development for battery-operated devices.

We've continued building up the portfolio by adding the Broadcom 4390, which we've been talking about here at Computex. The 4390 is an SoC designed for 8- and 16-bit microcontroller systems. It basically delivers Wi-Fi connectivity to low-power and battery-powered devices. Initial applications that the BCM4390 will support include sports and fitness, health and wellness, and security and automation. However, innovations based on the WICED platform can also help OEMs connect even the simplest appliances, including slow cookers, lights and more, with a single chip. We're currently sampling the 4390 and we expect products based on the chip to hit the market by the end of the year.

This is a crucial building block in the goal of end-to-end connectivity. Our vision is for everything to be wirelessly connected. We've seen this trend developing for some time and there is widespread belief that this is going to happen. With the WICED architecture we are really enabling this with a couple of core pieces of technology that are derived from our industry-leading Bluetooth and Wi-Fi products. Now, developers have a platform and the tools to implement Internet connectivity in a variety of devices, especially those without existing support for networking, like digital cameras, proximity tags and smart meters.

Q: What is driving this market?

A: Historically, the wireless embedded market was more complex than it needed to be. It required some professional gateway or some expensive box to enable communication with a device. However, if you use Wi-Fi, you don't need some special hookup to connect. But ultimately the catalyst for this market, and what is moving it forward, is the growth of the smartphone and tablet industries. All of these devices feature Wi-Fi. So now you have a device that is Wi-Fi enabled and that people are comfortable using.
And pretty much everyone on the planet is carrying one of these devices around with them.

Q: What do you mean by that?

A: The console will be the smartphone. IoT allows data to be gathered at all kinds of different nodes and then consolidated and moved to a common place, either for analysis or action. For example, maybe you will have some sort of sensor in your shoe that tells you when you have walked 10,000 steps, or maybe you have some kind of heart monitor. The data gathered by these sensors can be transferred wirelessly to your smartphone and consolidated there. It can then be monitored or, more importantly, uploaded to the cloud - to some website where you can analyze it and, based on that analysis, take some action, or simply keep track of the data on a day-to-day basis.

Some of the usage models for these devices are kind of simple. A great example is some kind of medical equipment or pill dispenser. You can have that device enabled with wireless, and basically you can monitor a person who is on medication but is living alone, and report back on a daily basis with the data - when the medication has been taken, or even whether the patient is still alive and his or her vital signs are being maintained within a predefined range. This saves the cost of having a nurse or medical practitioner go out into the field and make a house call to check up on the patient, which is both expensive and not very scalable.

Q: It is interesting that Broadcom is using Bluetooth and Wi-Fi for wireless communication in embedded devices. A lot of companies are using ZigBee instead. Why is that?

A: There are a multitude of different wireless protocols that are in a sense similar, but different. They all have strengths and weaknesses. ZigBee was one of the first wireless protocols in the embedded space. It went out there and got established a number of years ago, and really targeted machine-to-machine or sensor-type applications at a time when there wasn't really any other technology being tailored for those types of applications. However, although it was out there first, that doesn't mean it was the best technology over the long haul. In a couple of areas ZigBee has not stood the test of time.

One major area is interoperability. The solutions that are out there are not particularly interoperable from solution to solution or from vendor to vendor. This is an area where both the Bluetooth SIG and the Wi-Fi Alliance have done a really outstanding job of setting interoperability standards and enforcing those through really strict compatibility logo testing. So, if you have a Bluetooth device, you know it is going to be interoperable with all other Bluetooth devices. If you have a device with the Wi-Fi logo, you know it is going to work with all your other Wi-Fi devices.

The other issue with ZigBee is a practical one. You can connect things, but once they are connected, who do they talk to? The challenge is how to get the data from the devices into a format that can be analyzed. For ZigBee you need some kind of specialized gateway or box, and that box has to fit somewhere in your house or warehouse or factory. The fact is, this makes the entire system more complicated than it needs to be - people don't like to procure and support extra boxes. On the other hand, if these embedded devices are enabled with Wi-Fi and Bluetooth, they can communicate directly with a smartphone.
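As a purely illustrative aside, the short sketch below mimics the sensor-to-cloud flow described here: a battery-powered node periodically pushing readings over Wi-Fi so a phone app can pick them up later. A real WICED-based device would be written in C against Broadcom's SDK; the endpoint URL, device name and payload format in this sketch are hypothetical.

    # Illustrative only: a sensor node uploading readings over Wi-Fi to a
    # cloud endpoint, where a smartphone app could later retrieve them.
    # The URL, device name and JSON layout are hypothetical placeholders.
    import json
    import time
    import urllib.request

    CLOUD_ENDPOINT = "https://example.com/api/readings"  # hypothetical endpoint

    def read_step_count() -> int:
        """Stand-in for a real sensor read (e.g. an accelerometer pedometer)."""
        return 10432  # dummy value for the sketch

    def upload(reading: dict) -> None:
        """POST one JSON reading to the cloud over the device's Wi-Fi link."""
        req = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps(reading).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()  # a phone app would later pull and analyze these readings

    if __name__ == "__main__":
        # A real embedded node would deep-sleep between uploads to save battery.
        while True:
            upload({
                "device": "shoe-sensor-01",
                "steps": read_step_count(),
                "timestamp": time.time(),
            })
            time.sleep(60 * 15)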
This goes back to what I was saying before, that the catalyst that has led to the explosion in this area is the smartphones and tablets that already have Wi-Fi and Bluetooth. So if you have Bluetooth and Wi-Fi enabled devices, you can be confident that you can access them through your tablet or smartphone, or even your TV or PC. In this kind of usage model, you are much more likely to access and use that data, meaning it will add value to your life.

Broadcom marketing director Jeff Baer
Photo: Company
Wednesday 5 June 2013
It's not just about silicon: Intel VP Jason Chen talks up the company's more vertical approach to ultra-mobile devices
Jason LS Chen is a vice president in the Sales and Marketing Group at Intel and is responsible for all sales and marketing activities for Intel in Taiwan. As country manager, he supports original equipment manufacturers (OEMs) and original design manufacturers (ODMs) in Taiwan. He also manages domestic market sales and marketing operations in Taiwan, including Computex. As Taiwan site manager, he is responsible for coordinating operations in Intel's Taiwan office.

Digitimes spoke with Chen just before Computex Taipei 2013 to get a glimpse into the trends that are shaping Intel in Taiwan, especially in the mobile market.

Q: So you pretty much run all of Intel's interests here in Taiwan?

A: Not all. There is a small exception. Two OEM customers, Acer and Asustek Computer, have dedicated teams managing them. APAC manages those customers, as the complexity of these customers is quite high.

Q: Since you deal quite a bit with customers, you must get a lot of feedback. How does that feedback translate into making sure future products address customer concerns?

A: This process is actually managed through different sections of the company. People in the US are responsible for product definitions and features. They talk to Taiwan regularly, and once or twice a year they come to Taiwan and talk directly to customers. Of course customers will also talk directly with me about products. I provide that feedback to the product groups through regular executive communication. There is engagement on all levels.

Q: Over the past few years there has been a shift in the mobile space from traditional PC products to ultra-mobile products like smartphones and tablets, in particular over the past two years. Can you comment on the trends and how this is affecting Intel?

A: This is definitely the trend we are seeing here in Taiwan. In the tablet space, initially it was Apple driving the market; however, recently we have been seeing steep growth in non-Apple tablets. For example, Asustek is playing an aggressive role promoting its tablets in the market, and we are also working with some customers in the tablet space.

What we are seeing in this market is that user experience is becoming more important than product features. End users are paying more and more attention to what kind of experience they can get from their devices. The effect on Intel is that the traditional messages you see in the PC space concerning speed and leading-edge hardware specs are not enough for this market. We also need to delve more into how end products are being viewed and used by users.

Q: There is also a big difference in how mobile devices are sold compared with traditional PCs. Can you comment on the different distribution methods and how you are adapting to best target the various channels?

A: In the past, there was a distribution channel for PCs and there was a distribution channel for mobile handsets. They were very different channels and there was little overlap. For example, the handset area was hugely involved with telecoms and subsidized products, while the PC industry was a retail distribution play. Starting with feature phones, then smartphones and now with tablets, the products started to converge and channel players tried to cross over from each channel, but for the most part, there hasn't been a lot of crossover success. The key right now is tablets, because it is the only type of product that still exists in a gray area.
So both channels are laying claim to it. For Intel, we want to be involved in all types of products - handsets, tablets and PCs - so while we are continuing to work with the PC distribution channel, we need to be more involved in the telecom channel.

Q: What steps are you taking to be more involved and successful in the telecom channel?

A: There is no magic formula for that. The way we can succeed there is by getting as many design wins as possible. To do that, we have to continue improving our silicon, and that has been an ongoing process. We will be one step closer with products that are coming out later this year based on our Silvermont platform.

The other area we are paying closer attention to is operating systems for ultra-mobile devices, namely Android. In the phone space you have to really figure out how to run Android optimally on your devices. For that we are working very closely with Google, making sure that Intel silicon has the right support in the Android space and embracing the mobile device ecosystem a lot more than we did before.

Q: There are a lot of rumors in the market now that Intel will be making some big breakthroughs in the tablet market. Can you comment on any of those rumors?

A: Of course we can't talk about any specifics in that area because of customer confidentiality, but we are working very hard in this area. Personally, I think the right expectation is that the market will see a lot more Intel-based tablets in the future.

Q: Concerning operating systems and software in general, can you comment on how Intel is addressing the increased focus on user experience in the mobile device market?

A: This is a very complicated topic and perhaps your readers should check in with Intel during Computex, because we will have a lot to say about this. But for now, I can briefly say that for Intel, user experience is not just software. It is really the combination of hardware and software.

Take NFC (near field communication), for example: to have a successful solution you need hardware features enabled with software capabilities, and then an app that takes advantage of those features in a seamless way. It is only when all those things operate in unison that the technology becomes meaningful to the user, and a lot of the technology needs to be invisible to the user. You really need to pay attention to usage models.

One of the reasons Apple has been so successful with its mobile devices is the touch experience it delivers to end users, and that success involves a lot of hardware tuning. However, Apple is coming from a background where it was enabling smaller displays, and that goes for most players working with touch. We work with a broad swath of solution providers targeting various markets, so one strategy can't fit all. Just one or two years ago, there weren't that many 11-inch touch displays in the market. Moreover, the usage models - what types of features are most important - are different when comparing phone displays with tablets, and even more so when larger displays, say for a 21-inch all-in-one (AIO) PC, are taken into consideration. To address the various usage models and market segments, Intel introduced our touch-capacitive program last year at Computex.
We are working with a few touch panel leaders like TPK Holding and Wintek, with the purpose of enabling the touch industry in areas we feel are being ignored. If you look at the AIO PC market, for example, there are solutions available using today's technology, such as one glass solution (OGS), but that would be a very expensive solution for a 21-inch panel. So we are approaching areas such as this to see if other technologies, such as film-based touch and optical touch, can enable the market in terms of cost and user experience.

Q: Will Intel move into complete systems to address these challenges?

A: I am not aware of any complete devices that Intel will be doing. However, Intel does recognize that we need to improve on platform technology. For example, we have a lot of experience in horizontal businesses, like the industrial PC industry. But we are a relative newcomer to phone markets, where the involvement is much more vertical. To be more successful in this vertical space, we need to have a better understanding of the entire ecosystem. It is not only about supporting products through our silicon; we need to focus on the full system in order to enable partners throughout the supply chain.

Q: What about support on the operating system side? How different is it for Intel to work with Android compared with Microsoft Windows?

A: With Windows, Microsoft makes it easy to add hardware support. To support new devices, the vendors usually simply provide a driver. Microsoft provides the interface for incorporating the driver, and the process of getting the device supported is relatively straightforward. With Android, you have to integrate support for new devices at the operating system level. If you want to incorporate new sensors, memory or touch panels, it all has to be fitted into the operating system. And you have to do it by yourself. We are now doing a lot of in-house development to support our own Android capabilities. We provide customers with the full Android stack, so part of the enablement is us being very involved in the Android BSP (board support package).

As I mentioned previously, we also have to figure out how to best support a broad set of customers who are targeting different product segments and thus have different requirements. Not everyone will be involved in the same ecosystem.

Q: There are a lot of expectations that the upcoming Bay Trail SoC will help Intel better succeed with design wins for mobile handsets. Can you comment on the upcoming Atom (Silvermont) platform?

A: Silvermont is a very important development for Intel. The architecture will be based on a 22nm manufacturing process, which means we will be manufacturing Atom (Silvermont) on the same process as our Core architecture. This is the first time Intel has placed our low-power process on the same cadence as the most advanced Intel processor technology. Our latest Core (Haswell) will be on the market soon and will target productivity platforms where the main focus is performance, but for the ultra-mobile space, where very thin designs or very small form factors are key, Atom will be the choice.

Q: Atom has always been priced to be very affordable. Will the significantly improved power consumption (and related battery life gains) and the performance boost of migrating Silvermont to 22nm make Intel rethink pricing on the Atom platform?

A: Pricing will not see a big change.

Q: This should represent some interesting opportunities in the mobile space. Atom was the foundation of low-priced netbooks.
With the advances in power and performance, do you foresee a revival of the netbook market?

A: Netbooks didn't really go away. Our Classmate PC is still shipping in good quantities in the global education market. Convergence devices in the tablet and netbook market will also be adopted in the education market. We expect both types of products to run in parallel in this area.

Q: In terms of convergence devices, tablets have mostly been used for consuming content, but some users want more PC-type features in their tablets. With Bay Trail targeted at handsets, but able to support full PC performance, do you see even more opportunities for convergence devices; for example, a US$200 Android PC?

A: US$200 price points are very realistic for these devices. In terms of convergence PC devices, there could be interest in tablets that support full USB I/O so that a keyboard can be attached. We have plans to support these types of devices in the market very aggressively, no matter the OS. Obviously, if users are looking for a keyboard, they are looking for productivity, and that means better performance than most current tablets offer. These new types of convergence devices could be based on Android, but this is really new so we don't know for sure what will happen.

Jason LS Chen is a vice president in the Sales and Marketing Group at Intel and is responsible for all sales and marketing activities for Intel in Taiwan.
Photo: Company
Wednesday 5 June 2013
Make good products and growth will come naturally: Q&A with Gigabyte notebook team
Despite being one of the market leaders in the motherboard and graphics card industries, Gigabyte Technology is relatively unknown as a notebook player. Digitimes recently sat down with Richard Ma, Gigabyte Senior Vice President, and Vincent Li, G-style Sales Division Director, to discuss the company's outlook for the notebook industry in 2013, and its plans for Computex Taipei 2013.

Q: Gigabyte is mostly known for motherboards and graphics cards. Where do notebooks stand in the company picture?

Vincent: In terms of the Gigabyte Group, the motherboard business accounts for around 55-60% of revenues, and another 15-20% is graphics cards. So motherboards and graphics cards account for almost 80% of total revenues. Another 10% is networking related, including servers, client devices, set-top boxes (STBs), hubs and so on, and the last 10% is mobile devices, including notebooks and mobile phones/smartphones.

Q: Do you expect that percentage to grow? What is your outlook for 2013?

Vincent: This year our market intelligence indicates that the first quarter was not so good, because the PC market on average dropped around 15% on year. Everybody expects the second quarter to see a smaller drop, around 5% compared to last year, meaning there's an average decline of 10% in the first half of the year.

Q: When you say PC market, does that cover notebooks, desktops and tablets?

Vincent: It doesn't include ARM-based tablets. I'm talking about what we'd call traditional PC businesses. Our goal is to find ways to expand or extend things that are PC related. Another way to think about it is that the boundary is between devices built around x86 CPUs and the other business, which is concerned with ARM-based devices. On the consumer side, the tablet market has both kinds of devices, ARM-based and x86, so we divide that into Windows- or Android-based, as well as iOS. In the past 2-3 years, ARM-based platforms have grown a lot, especially smartphones, and this has had a big impact on mobile PC platforms.

The good thing for this year is that for Windows tablets we expect a lot of growth compared to last year. In the second half, x86 CPUs will become more power efficient while still delivering the same kind of performance improvements as last year's models. This will help form factors evolve, so you will see the new generation of Windows tablets being thinner, lighter and more power-efficient, while also being more powerful. This is good for PC players focused on x86 platforms, as we see an opportunity to grow the market based on the new generation of x86 Windows-based tablets. Of course we also know that Windows 8 will be upgraded too, which will help improve the touch experience, and the overall tablet experience for consumer and business users. Overall I really expect big changes this year based on these factors.

Q: What will be your major announcements at Computex 2013?

Vincent: At Gigabyte we have three major themes: thin-and-light gaming, powerful ultrabooks, and Windows x86-based tablets. For notebooks this year, Intel will push ultrabooks as mainstream devices. Basically we will see the ultrabook concept become the standard design in the market, and we'll see ultrabook influences across all platforms. At Gigabyte we have taken our cue from this trend, so you'll see ultrabook features in our gaming notebooks, for example.
In the past, the gaming platform was very powerful but came with a huge form factor, so our concept and our technological focus has been how to design a powerful platform in an ultrabook-like form factor.

For gaming platforms, or what we are now calling ultra-gaming, you'll see our new P34 and P35 14- and 15-inch notebooks, and later a P37 17-inch model. The main concept behind this series is taking gaming performance hardware into an ultrabook design, a very slim design. The 14-inch model is just 21mm thick, and the ultrabook definition is 21mm. Inside we have put Nvidia GTX-level graphics, and to achieve this we designed two isolated and powerful fan outlets in order to handle the thermal output from the GPU. However, the weight is only 1.9kg for the 14-inch model.

Another key design concept has been that all our units carry a full set of I/O ports in order to provide the end user with the best usage experience. It's not like other ultrabooks where you have to sacrifice ports in order to get the design very slim. We think that the user doesn't want to sacrifice.

Regular gaming notebooks are conventionally huge and very heavy. These types of machines are popular among teenagers and students, but when these users graduate and start work, they want to use the same device for work. They don't want to carry around a heavy box, and they don't want a device with lots of bright LEDs and attention-grabbing colors, which are common in gaming platforms. Our aim has been to deliver gaming-level performance in a design that is not going to stand out in an office cubicle or meeting room.

Richard: Gigabyte already has a top-tier reputation for our motherboard and graphics card engineering and quality. For notebooks, if you want to put Core i7 and GTX 7-series level hardware inside such a slim box without sacrificing performance, as we have, the technology demands are really high in terms of electronic and thermal integration.

Then for storage, we still provide a regular hard drive in order to deliver capacity, but we've also implemented support for two mSATA sockets. So we can do SSD RAID for performance, again without sacrificing capacity and still keeping the slim form factor. Also, in the 15-inch model we have accommodated an optical drive, which we think meets the most common usage needs at this level. This is a 2.3kg device, so we wanted to use that extra space to provide the full usage experience.

Q: That's gamers covered; what about the mainstream market?

Vincent: For ultrabooks this year the standard, as defined by Intel, says they must be equipped with a touchscreen, so that's the key feature you will see in our models for this segment.

For ultrabooks we think there are three main types of customers: users concerned about weight, those concerned about performance, and those focused on price. In the market you can find a machine that has a good price and is very slim, perhaps only 15mm, but you have to sacrifice a lot in terms of performance, I/O ports and storage. For our ultrabook strategy we don't want to compromise on these key features, so our devices meet the standard for ultrabooks - battery life, thickness and touchscreen all comply with the ultrabook definition - but we don't want to sacrifice performance, so we have implemented the highest-performance Nvidia GT graphics, just below GTX level, and for storage we have a regular HDD alongside mSATA, so we can have speed and capacity at a good price.
Also, the main thing is we still have all the I/O ports. That's why we call this category ultra-performance.

Richard: I think for users that want an ultrabook with performance, we provide the most balanced design.

Q: Of course everyone expects tablets to be big this year. What new products will Gigabyte have to offer?

Vincent: Our flagship product for this segment is a design we call the Padbook. This can also be classified in the ultrabook category, as we've designed it with a detachable keyboard. If you look at the tablets on the market, there are no devices with full I/O, but we've got USB 3.0, microSD, audio and HDMI, and we even managed to put in VGA, because when you go out to do a presentation or something, the common projector standard is still VGA.

Another highlight of this design is the finger mouse, which allows the user to hold it like a tablet and still use Windows in desktop mode, which requires a very precise pointer.

The final highlight of this tablet is the keyboard. It is not like a traditional 10- or 11-inch keyboard; it's actually full size, the same as you will find on our 14-inch ultrabooks. This makes such a big difference to the experience. It allows full-speed typing, so productivity is not affected. And then we have a trackpad too, which is not common in keyboards for tablets, but we feel that it was an important feature to retain, because when you go out, you don't want to have to remember to bring a mouse. It's all about providing "everywhere usage".

On the market there are a lot of ARM-based tablets, which are very light and offer long battery life. However, when we design tablets for Windows, we want to allow the user to leverage the best experience of the OS. So that means compatibility and also I/O; otherwise why choose Windows? People want touch, but they also want the conventional Windows experience. That was our target. That was our original design philosophy.

Richard: When people look at our device and what it can offer, they see they don't need an Android device and a Windows PC; they don't need two units. This device meets all those criteria - the form factor, the cost and the features - and again we feel that our design represents the most balanced in the segment.

Q: Based on the devices we've discussed so far, it seems fair to say you are targeting the high-end market. Is that your strategy - to focus on the high-end and leave the low-end market to others?

Vincent: Despite the size and long pedigree of Gigabyte, in terms of mobile devices we are still a newcomer to the market. So at this stage our focus is on the mid-to-high-end market. This allows us to demonstrate our technological capabilities and our wealth of engineering expertise by giving end users a premium experience. People ask me, "Don't you want volume?" I do, but at this moment that would not be the right strategy for us. That is why we have chosen the three-category product strategy we discussed; these categories give us the opportunity to really highlight the technology and bring value to the market. Another consideration is that our economic scale is not that big, so trying to compete on price would not be a good strategy for us.

Richard: One thing to note is that all these products are designed in Taiwan and made in Taiwan. Maybe one or two components are sourced from China, but all design comes from our in-house teams - industrial design, mechanical engineering, electronics and software/BIOS - everything is done in Taiwan. This is very special compared to the rest of the market.
We do the design ourselves, we manufacture at our own factory in Taoyuan, and we handle all the sales and marketing ourselves from here in Taipei.

Q: Doesn't manufacturing in Taiwan limit your volumes?

Richard: Right now, we aren't targeting volumes of one million units a month. Our focus is brand positioning; we want consumers to be aware of our products and our value-added technology. If we focus on building a strong brand now, based on features and quality, we are confident that our market will grow and grow.

Q: Recently we've seen quite a few novel designs for convertible notebooks - for example, dual touchscreen designs, or ones where the keyboard and trackpad are reversed so that the screen can be positioned between the two. Why has Gigabyte stuck with the more traditional rotating hinge or book designs?

Richard: There are a lot of ways to implement this concept - when you want both the traditional notebook and tablet usage modes in the same device. Our research and testing into the different form factors showed that these implementations satisfy the widest range of applications. We feel you need to consider the most common applications foremost, and make sure not to compromise usability there.

For example, in the design you mentioned where the keyboard is moved to the bottom, first this means you lose the use of the trackpad when in folded mode, and more importantly you lose the palm rest space. If the keyboard is on the bottom edge, where do your palms rest? How do you maintain a comfortable wrist angle without getting tired? And how can you use this on your lap without it moving? There's no point changing a design just to be new or different if it means sacrificing usability. Sometimes traditional designs are best.

Q: What are your key markets currently, and what are your aims for 2013?

Richard: Because right now our volumes are not very high, there is little difference in our market shares across regions. North America, Europe, Asia - in each region the picture is about the same. There are certain countries, such as China and India, where the market is so big that volumes are a bit higher, but product prices there are a bit lower too. In these cases we have to adjust our product mix and change the SKUs on offer, and so overall things balance out.

Our long-term policy is to enlarge the mid-range to high-end segment, even in emerging markets, and to remain responsive to the market. Building brand image with good products, good designs, and good features and innovation will remain the main focus.

Compared to, say, five years ago, the market has changed considerably. It used to be that you knew what to do for each season - when to release new products, when to clear stock, when to cut prices and so on. You knew what to do each quarter. But in the past two years the market has changed; sometimes you can't accurately see two months ahead. We have learned a lot. We are careful about setting strategy, but we also have to be able to listen to market feedback and respond to change quickly.

For Gigabyte, I think the important thing for us is to continue to show passion. We need to bring value to the market for customers, and differentiate by leveraging our specialties. I don't want to be like a typical PC player pushing out new products just to follow the latest trend, or cutting prices to sell more. If we can make really good products, growth will come naturally. That is my philosophy.

Richard Ma, Gigabyte Senior Vice President
Photo: Company