
Artificial Intelligence Unit

This is IBM’s first complete system-on-chip designed to run and train deep learning models faster and more efficiently than a general-purpose CPU.

Source: IBM

How did AI emerge?

Modern AI first emerged ten years ago, when a group of academic researchers showed that a deep learning model, trained on millions of photos over days of brute-force computation, could recognize objects and animals in entirely new images. Beyond categorizing pictures of cats and dogs, deep learning now translates languages, spots malignancies in x-rays, and handles millions of other time-saving tasks.

How is the Artificial Intelligence Unit going to solve the computation problem?

There is only one issue: computing power is becoming scarce. While the size of AI models is growing exponentially, the hardware needed to train them and run them on cloud servers or edge devices like smartphones and sensors has not improved as quickly. For this reason, the IBM Research AI Hardware Center decided to develop an AI-specific computer chip: what we refer to as the Artificial Intelligence Unit, or AIU.

Standard processors known as CPUs, or central processing units, the workhorses of conventional computing, were created before the revolution in deep learning, a branch of machine learning that generates predictions based on statistical patterns in large data sets. The flexibility and high precision of CPUs are ideally suited to general-purpose software. However, those advantageous traits put them at a disadvantage when it comes to training and running deep learning models, which demand massively parallel AI operations.
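
To make “massively parallel” concrete, here is a minimal Python sketch (the layer sizes are arbitrary, chosen only for illustration): a dense layer’s forward pass reduces to one large matrix multiplication whose multiply-accumulate operations are all independent of one another, exactly the kind of workload that parallel AI hardware accelerates and that a CPU grinds through a few operations at a time.

```python
import numpy as np

# A dense layer's forward pass is one big matrix multiplication plus an
# activation. Layer sizes here are arbitrary, purely for illustration.
rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 1024)).astype(np.float32)     # 64 inputs
weights = rng.standard_normal((1024, 1024)).astype(np.float32)

# 64 * 1024 * 1024 multiply-accumulates, each independent of the others:
# a CPU works through them a few at a time, while parallel AI hardware
# spreads them across thousands of compute units at once.
activations = np.maximum(batch @ weights, 0.0)  # matmul + ReLU
print(activations.shape)  # (64, 1024)
```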

Source: IBM

Why do we need an AI chip? Let’s understand with an analogy.

An automobile may manage to run on the wrong fuel, but the correct fuel is required if the goal is to optimize speed and efficiency. The same idea holds true in AI.

In other words, a general-purpose CPU is not enough for the task at hand: computing enormous volumes of data.

For the past ten years, we have been running deep learning models on CPUs and GPUs, graphics processors built to render visuals for video games, when what we truly needed was an all-purpose chip optimized for the types of matrix and vector multiplication operations used in deep learning. IBM has spent the last five years working out how to design a chip purpose-built for the statistics of modern AI.

IBM decided to find new ways to advance. And what are those?

First, accept lower precision.

Unlike a CPU, an AI chip does not need to be nearly as precise. We are not calculating the trajectory for a spacecraft to land on the moon or counting the hairs on a cat. We are making predictions and decisions that don’t demand that level of granularity.

Using approximate computing, a technique pioneered by IBM, we can drop from 32-bit floating-point arithmetic to bit formats carrying a quarter as much information. This streamlined approach significantly reduces the amount of computation required to train and run an AI model without sacrificing accuracy.
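
As a rough illustration of the principle (a generic 8-bit quantization sketch, not IBM’s actual proprietary formats), shrinking weights from 32 bits to 8 bits, a quarter of the information, barely perturbs the result of a matrix-vector product:

```python
import numpy as np

# Generic 8-bit quantization sketch; the AIU's real reduced-precision
# formats are IBM's own and are not reproduced here.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((256, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

# Map each 32-bit weight onto 8-bit integer levels, then back again.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)
weights_dequant = weights_int8.astype(np.float32) * scale

y_full = weights_fp32 @ x
y_quant = weights_dequant @ x

# Despite 4x fewer bits per weight, the two outputs agree closely.
rel_error = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(f"relative error: {rel_error:.4%}")
```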

Leaner bit formats also reduce another drag on speed: moving data to and from memory. By using a variety of lower-bit formats, including both floating-point and integer representations, our AIU makes running an AI model far less memory-intensive. We leverage key IBM innovations from the past five years to find the ideal balance between speed and accuracy.
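
The memory savings follow directly from the bit width. A back-of-the-envelope sketch (the one-billion-parameter model below is hypothetical):

```python
# Bytes moved through memory shrink linearly with the bit format.
params = 1_000_000_000  # hypothetical 1B-parameter model

for fmt, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{fmt}: {params * bytes_per_param / 1e9:.0f} GB of weights to move")
# fp32: 4 GB, fp16: 2 GB, int8: 1 GB -- leaner formats mean
# proportionally less traffic between memory and compute.
```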

Two, an AI chip should be designed to streamline AI workflows. Because the majority of AI operations involve matrix and vector multiplication, our chip architecture features a simpler layout than a multi-purpose CPU. The IBM AIU is also built to send data directly from one compute engine to the next, which creates significant energy savings.
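
As a loose software analogy for that engine-to-engine dataflow (this is not the AIU’s actual pipeline, just a generator-based sketch), each stage below hands its output tile straight to the next stage, so no full-size intermediate array is ever written back to memory:

```python
import numpy as np

# Each stage consumes the previous stage's output one tile at a time,
# analogous to compute engines passing results directly to one another.
def matmul_stage(tiles, w):
    for t in tiles:               # t: a (64, 256) tile
        yield t @ w               # handed straight to the next stage

def relu_stage(tiles):
    for t in tiles:
        yield np.maximum(t, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 256)).astype(np.float32)
w = rng.standard_normal((256, 256)).astype(np.float32)

tiles = (x[i:i + 64] for i in range(0, 1024, 64))   # stream 64-row tiles
out = np.vstack(list(relu_stage(matmul_stage(tiles, w))))
print(out.shape)  # (1024, 256) -- only the final result is materialized
```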

The IBM AIU is an “application-specific integrated circuit” (ASIC). It is designed for deep learning and can be programmed to run any deep-learning task, whether that’s processing spoken language or words and images on a screen. Our complete system-on-chip features 32 processing cores and contains 23 billion transistors, roughly the same number packed into our z16 chip. The IBM AIU is also meant to be as easy to use as a graphics card: it can be plugged into any computer or server with a PCIe slot.

This chip wasn’t created entirely from scratch. Rather, it is a scaled-up version of an already proven AI accelerator built into our Telum chip. The 32 cores in the IBM AIU closely resemble the AI core embedded in the Telum processor that powers our latest IBM z16 system. (Telum uses 7 nm transistors, whereas our AIU will use faster, even smaller 5 nm transistors.)

The AI cores built into Telum, and now our first specialized AI chip, are products of the AI Hardware Center’s aggressive roadmap to grow IBM’s AI processing firepower. Because of the time and money required to train and run deep-learning models, we have only begun to scratch the surface of what AI can do, particularly for industry.

To address this gap, we established the AI Hardware Center in 2019 with the goal of doubling AI hardware efficiency every year. By 2029, our objective is to train and run AI models 1,000 times faster than we could three years ago.
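
Those two targets are consistent with each other: doubling efficiency every year from 2019 to 2029 compounds to roughly a thousandfold speedup.

```python
# Doubling every year for a decade compounds to about 1,000x (2**10 = 1024).
speedup = 2 ** (2029 - 2019)
print(f"~{speedup}x")  # ~1024x
```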

Using artificial intelligence to identify cats and dogs in images is a fascinating academic exercise, but it won’t solve the pressing problems we face today. For AI to tackle the complexities of the real world, such as forecasting the next Hurricane Ian or whether a recession is on the horizon, we need enterprise-grade, industrial-scale hardware. Our AIU takes us one step closer. We aim to announce details about its availability soon.

Article source: IBM official announcements

Questions and Answers


What is an Artificial Intelligence Unit?

An AI Unit measures maximum resource consumption on the H2O AI Cloud (HAIC). Formally: an AI Unit is an H2O AI Cloud unit of measurement used to track how many CPUs, GBs (gigabytes of RAM), and GPUs a given company uses on the platform.
What are the four forms of AI?

The four main categories of AI now recognized are reactive, limited memory, theory of mind, and self-aware.
What is an Artificial Intelligence CPU?

Intel AI processors are designed from the ground up to overcome existing memory and data-flow bottlenecks, enabling distributed learning algorithms and systems that scale up deep learning inference and apply more sophisticated forms of AI, moving beyond simply converting data into information toward turning data into knowledge.
What is the best Artificial Intelligence processor?

The two recommended platforms are Intel Xeon W and AMD Threadripper Pro. Both offer outstanding reliability, provide the PCI-Express lanes required for multiple video cards (GPUs), and deliver top-notch memory performance in the CPU space.
Does Artificial Intelligence have a chip?

Google’s latest AI chip is called the TPU (Tensor Processing Unit). It was created especially for machine learning tasks and is substantially faster than conventional CPUs and GPUs, performing trillions of operations per second.
What are AI chips called?

Field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and graphics processing units (GPUs) are examples of chips used in artificial intelligence (AI).
Is AI CPU or GPU?

The three primary types of hardware available for AI are FPGAs, GPUs, and CPUs. For AI applications where speed and reaction times are critical, FPGAs and GPUs offer advantages in learning and response time.
What is the world’s strongest AI?

Meta has unveiled what it calls the world’s most powerful AI supercomputer.
What is the strongest AI?

Summit is the world’s fastest supercomputer and the first to achieve exaop (exa operations per second) speed.
Who is leading in AI chips?

Cerebras Systems
In April 2021, the company introduced the Cerebras WSE-2, an AI chip with 850,000 cores and 2.6 trillion transistors. Without a doubt, the WSE-2 outperforms the WSE-1, which has 1.2 trillion transistors and 400,000 compute cores.
Is AI used in nuclear weapons?

Artificial intelligence may aid nuclear stability if it improves decision-making and communication reliability. In fact, AI is already incorporated into a number of nuclear command, control, and communication systems, including early-warning systems.
What is Tesla’s AI called? What is Elon Musk’s AI?

Optimus
Tesla, Inc. is developing Optimus, also known as Tesla Bot, a general-purpose humanoid robot. The company held an Artificial Intelligence (AI) Day event on August 19, 2021, at which Tesla CEO Elon Musk said a prototype would likely be built by 2022.
