
As generative AI sweeps across the globe, DeepMentor's DeepExpert lowers the threshold for enterprises to implement generative AI projects


Leo Yang, Chief Marketing Officer of DeepMentor, and Jack Wu, CEO of DeepMentor. Credit: DIGITIMES

Launched in late 2022, ChatGPT showed the world the power of generative AI, paving the way for the AI 2.0 revolution. At the heart of generative AI is the so-called large language model (LLM), which is trained on vast amounts of data. ChatGPT, for example, is based on either GPT-3, with 175 billion parameters, or GPT-3.5, with 200 billion parameters. Currently, apart from OpenAI's models, Taiwan Web Service's (TWS) Formosa Foundation Model (FFM), and other paid LLMs offered by domestic and foreign companies, there are many open-source LLMs on the market, such as BLOOM, developed by the Hugging Face-coordinated BigScience project, and Meta's Llama 2. These LLMs range from 7 billion to 176 billion parameters and can legally be used in commercial environments.
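To illustrate how accessible these open-source models have become, below is a minimal sketch of loading a commercially licensed 7B-parameter model through the Hugging Face transformers library. The model ID, prompt, and generation settings are illustrative assumptions, not part of any vendor's product described in this article.

```python
# Minimal sketch: running a commercially licensed open-source LLM locally.
# Assumes the Hugging Face `transformers` library and access to the gated
# meta-llama/Llama-2-7b-chat-hf checkpoint (requires accepting Meta's license).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative choice of a 7B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single workstation GPU
    device_map="auto",          # place layers on available GPU/CPU automatically
)

prompt = "Summarize the benefits of on-premises generative AI for enterprises."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```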

Both commercially licensed and free open-source LLMs, once trained on proprietary data, can be utilized by companies in a variety of ways. For example, LLMs can be used for intelligent customer service, code writing, or improving the efficiency with which corporate information is used. Such applications have significant potential to increase business productivity and industrial competitiveness. As a result, numerous providers on the market claim to help enterprises implement generative AI. However, only a few companies have invested in such projects, while most are adopting a wait-and-see attitude.
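To make the idea of training on proprietary data concrete, here is a hedged sketch of one common approach: parameter-efficient fine-tuning with LoRA via the open-source peft library. The model ID and hyperparameters are illustrative assumptions and do not describe any specific vendor's pipeline.

```python
# Sketch: adapting an open-source LLM to proprietary data with LoRA
# (parameter-efficient fine-tuning). Uses the Hugging Face `peft` and
# `transformers` libraries; all settings below are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# From here, a standard Trainer loop can run on the company's own documents,
# support tickets, or code base, and the trained adapter can be merged or served.
```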

Leo Yang, Chief Marketing Officer of DeepMentor, pointed out that despite strong demand for generative AI and the urgent need for early adoption to improve industry competitiveness, companies remain hesitant for three main reasons: confidentiality and security concerns, high implementation costs, and a shortage of AI talent. Even companies interested in moving forward with generative AI projects often do not know where to start, as there is a frenzied scramble for AI talent worldwide. In addition, most vendors offer generative AI services on cloud-based platforms, leaving companies concerned about the potential leakage of confidential data. Finally, the cost of training AI models in the cloud can easily reach hundreds of thousands or even millions of dollars, as most proprietary models need to be trained three to five times before they meet customer requirements. These high costs deter many companies from implementing AI projects.

Customized Generative AI Designed for Cloud or On-Premises Use Drastically Reduces Project Costs and Time

Given the three factors above, only a few large companies have enough AI talent and capital to invest tens of millions of dollars at a time in AI servers and perform in-house AI model training and inference satisfactorily. Most medium-sized companies can only look on in envy, to say nothing of small and medium-sized enterprises with limited budgets and manpower.

To help more enterprises and educational institutions adopt generative AI, DeepMentor launched the Mentor series (basic to advanced versions), a comprehensive hardware and software solution for building generative AI. Customers can choose to conduct different levels of all-parameter LLM training (see the diagram below) based on the hardware of the Mentor series, and DeepMentor helps them verify the training results against actual application needs. Take the entry-level Mentor-100 as an example. Developed in collaboration with Axiomtek, a major industrial computer brand, Mentor-100 is a small-scale edge computing AI server that is quiet, energy-saving, and well suited to office hardware and software environments. Used together with Fine-Tune Expert, the training software in DeepMentor's pre-installed GAI application software package DeepExpert™, and the accompanying training and consulting curriculum, enterprises across industries and units at different levels can easily build applications that cater to their internal needs.

Jack Wu, CEO of DeepMentor, believes that the ability to perform inference with 7B- or 13B-parameter LLMs is an important step in advancing AI projects. First, even if a medium or large enterprise has sufficient financial resources, it may struggle to secure a project budget of more than NT$10 million without supporting data to prove the project's viability. Mentor-100's favorable pricing, by contrast, makes it very well suited for conducting early-stage proof of concept (POC) work internally on a small scale. For schools, research institutes, and small and medium-sized enterprises, LLMs with 7B, 13B, or 33B parameters are more than sufficient for relatively simple usage scenarios or for performing POCs; an LLM with 176B parameters is not necessary in these cases.
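A brief sketch of why 7B- or 13B-parameter models are practical for small-scale POCs: with 4-bit quantization, for instance via the bitsandbytes integration in Hugging Face transformers, a 7B model needs only around 4 GB of GPU memory. The tooling and model choice here are illustrative assumptions, not a description of Mentor-100's internals.

```python
# Sketch: loading a 13B model in 4-bit precision for low-cost POC inference.
# Uses the `bitsandbytes` integration in Hugging Face `transformers`;
# the model choice is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True)

model_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # 4-bit weights cut memory use roughly 4x
    device_map="auto",                 # fits on a single mid-range GPU
)

inputs = tokenizer(
    "Draft a reply to a customer asking about delivery times.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```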

It is worth noting that DeepMentor, as one of the few companies focused on generative AI solutions with a considerable track record, was invited to participate in the testing and promotion of the TAIDE program, a Taiwanese traditional Chinese LLM announced by the National Science and Technology Council.

Six Usage Scenarios Accelerate Implementation of Generative AI

DeepMentor's DeepExpert solution has two unique features. First, depending on their business needs, customers can choose open-source LLMs of different types and parameter counts, such as Meta's Llama 2, or FFM, an LLM licensed by DeepMentor's partner TWS and pre-trained on traditional Chinese corpora. Second, on the hardware side, DeepMentor works exclusively with Phison Electronics, using Phison's NVMe module, which is specifically designed for generative AI training and overcomes the memory capacity limitations of current GPU cards. This allows AI models to be trained with just two NVIDIA RTX A4000-ada GPU cards instead of the far more expensive NVIDIA A100/H100 chips, significantly reducing overall costs and allowing even small and medium-sized enterprises or academic institutions with limited budgets to take on generative AI projects and see inference results in roughly two weeks.
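Phison's module is proprietary, but the general technique of spilling parameters and optimizer state from GPU memory to NVMe storage also exists in open source. The sketch below shows an analogous DeepSpeed ZeRO-Infinity configuration; the NVMe path and settings are illustrative assumptions, not Phison's or DeepMentor's actual configuration.

```python
# Sketch: a DeepSpeed ZeRO-Infinity configuration that offloads parameters and
# optimizer state to NVMe, letting a model larger than GPU memory be trained
# on modest cards. Open-source analogue of the NVMe-offload idea; the mount
# point and batch size below are illustrative assumptions.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 3,  # partition parameters, gradients, and optimizer state
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme",  # assumed mount point of the NVMe drive
        },
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
        },
    },
    "bf16": {"enabled": True},
}

# Typical use (requires `deepspeed` and a model/optimizer already defined):
# import deepspeed
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config)
```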

Leo Yang pointed out that in addition to the DeepExpert™ solution, DeepMentor also offers pre-trained and optimized AI application modules for six common usage scenarios: Document Expert, Code Expert, Customer Service Expert, Meeting Expert, Image Expert, and Fine-Tune Expert. Coupled with a self-developed, easy-to-use no-code tool, these modules help enterprises quickly integrate their own data to train and run inference on proprietary AI models. As a result, government agencies, financial institutions, publicly listed IC design companies, well-established enterprises from traditional industries, and retailers are already running point of business (POB) testing with DeepMentor to verify DeepExpert™'s capabilities as a one-stop generative AI implementation solution.

DeepMentor has been working with Taiwan Tech Arena (TTA) since 2018 and has participated in a number of business matchmaking events and technical presentations. The collaboration has helped raise the company's profile and secure venture capital investment. DeepMentor hopes to maintain a close cooperative relationship with TTA in the future to lay a solid foundation for expansion into overseas markets.

Enterprise-level GAI on-premises solution of DeepMentor. Photo: DeepMentor