Intel is "duty-bound" to make generative AI pervasive
An Intel Fellow and global chief technology officer of big data technologies said in an interview with reporters that Intel is committed to making generative AI pervasive, so that every industry, as well as ordinary consumers, can use it.
Making generative AI ubiquitous
To realize this vision, on the one hand, diversity in software and algorithm design is essential to meeting the computing-power needs of generative AI and large language models. Intel embraces open source and the open AI community, cooperating with TensorFlow, PyTorch, Hybrid Bonding, OpenAI, and others to contribute to open-source AI software tools and infrastructure, automatically generating optimal low-level code for different hardware platforms and making development of upper-layer frameworks and algorithms more efficient. At the same time, Intel has cooperated with Hugging Face on open large language models.
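As a rough illustration of this software-level optimization (not taken from the interview), the sketch below assumes the publicly documented intel-extension-for-pytorch API and a placeholder Hugging Face model; it shows how an existing PyTorch model can be handed to Intel's open-source extension, which emits kernels tuned to the underlying hardware without the framework user rewriting the model.

```python
# Minimal sketch (assumption, not from the article): letting Intel's
# open-source PyTorch extension optimize an existing model for the host CPU.
import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder model used only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

# ipex.optimize applies operator fusion and, where supported, bfloat16
# execution tuned to the CPU it is running on.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Generative AI on Intel hardware", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```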
On the other hand, Intel supports the ubiquity of generative AI by optimizing and improving computing power, whether on GPUs, on AI accelerators such as Gaudi 2, or on the newly released fourth-generation Intel Xeon Scalable processors, which add a built-in accelerator for matrix computation (Intel AMX) to speed up neural network workloads.
Previously, Hugging Face showed that Stable Diffusion runs 3.8 times faster on average on fourth-generation Intel Xeon Scalable processors with built-in Intel Advanced Matrix Extensions (Intel AMX). Moreover, this acceleration was achieved without changing any code.
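The speedup reported above required no code changes; for readers who want to opt in explicitly, a minimal sketch follows, assuming the commonly documented diffusers and intel-extension-for-pytorch APIs and a placeholder checkpoint name. Running the pipeline in bfloat16 lets the AMX matrix engine accelerate the heavy matrix multiplications.

```python
# Hedged sketch (assumption): Stable Diffusion on a 4th-gen Xeon in bfloat16
# so the built-in Intel AMX unit can accelerate the matrix math.
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.bfloat16,
)
# Optimize the compute-heavy submodules for the host CPU.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16)
pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16)

with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```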
Beyond large language models, Intel can run Stable Diffusion on a 12th-generation Core laptop without any discrete graphics, directly using the integrated graphics, and generate a picture in 20 to 30 seconds with a good user experience.
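One common way to target a laptop's integrated graphics is through OpenVINO; the sketch below is an assumption based on the optimum-intel integration (not a method stated in the article), again with a placeholder checkpoint.

```python
# Hedged sketch (assumption): Stable Diffusion through OpenVINO on a laptop's
# integrated GPU, with no discrete graphics card required.
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    export=True,   # convert the PyTorch weights to OpenVINO IR on the fly
)
pipe.to("GPU")     # "GPU" selects the integrated graphics device in OpenVINO
pipe.compile()

image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("lighthouse.png")
```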
In short, across the entire computing landscape, from client to cloud to edge and from small devices to large-scale data centers, Intel has put forward a "software-defined, chip-enhanced" strategy: users define what kind of computing power they need through software, and the hardware is then enhanced to better support it. Facing different applications and scenarios, Intel supports generative AI applications by providing a full range of "intelligent computing" capabilities.
Responsible AI
Intel is not content with merely making AI ubiquitous. Large language models do not just optimize productivity; they also raise data security and privacy issues.
When introducing new technologies, especially AI, it is important to ground decisions in science and data and to guide the entire process through governance. Intel has proposed Responsible AI, which applies rigorous multidisciplinary reviews throughout the development lifecycle, including building diverse development teams to minimize bias, developing AI models that follow ethical principles while evaluating the results broadly from both technical and humanistic perspectives, and collaborating with industry partners to reduce the potential negative impacts of AI.
Intel is committed to building responsible AI and has done a great deal of work on data security and privacy-preserving computing. Intel hardware-level security technologies such as Intel TDX and Intel SGX keep data from leaving its trusted domain and reduce the risk of data leakage; coupled with the privacy computing platform for big data analytics and machine learning built on the software layer (BigDL PPML, Privacy Preserving Machine Learning), and combined with large language models and Stable Diffusion, generative AI applications can be protected on both the data side and the model side to ensure data security and privacy. In addition, Intel uses algorithms to detect machine-generated content, helping to verify content authenticity.
Take running Stable Diffusion or large language models on a laptop, for example: this not only lowers the threshold for using AI, so that ordinary consumers can use these generative AI applications, but also greatly protects the privacy of data and models, because the entire generative AI stack, including the large language model, the algorithms, the applications, and the data, can be deployed locally without being shared with anyone else.
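As a minimal sketch of such local deployment, assuming the documented BigDL low-bit LLM API (a library choice of this illustration, not one named in the interview) and a placeholder model path, everything below runs on the laptop itself, so prompts and outputs never leave the machine.

```python
# Hedged sketch (assumption): running a large language model entirely on a
# local laptop with BigDL's low-bit LLM support; no data is sent off-device.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder local model
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,   # 4-bit weights keep memory use laptop-friendly
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

prompt = "Summarize the benefits of running AI locally."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```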
All in all, Intel's goal is to uphold a responsible attitude and principles, and to work closely with the entire industry to support a variety of AI models, including generative AI.