
Specialized AI Chip Market Seen Expanding Rapidly

The Cerebras Wafer Scale Engine powering the CS-1 chip shown here is said by the company to be 56 times the size of the largest GPU. (CEREBRAS)

By AI Trends Staff

The fragmenting and increasingly specialized AI chip market will cause developers of AI applications to have to make platform choices for upcoming projects, choices with potentially long-term implications.

AI chip specialization arguably began with graphics processing units, originally developed for gaming then deployed for applications such as deep learning. When NVIDIA released its CUDA toolkit for making GPUs programmable in 2007, it opened the market up to a wider range of developers, noted a recent account in IEEE Spectrum written by Evan Sparks, CEO of Determined AI.

GPU processing power has advanced rapidly. Chips originally designed to render images are now the workhorses powering AI R&D. Many of the linear algebra routines necessary to make Fortnite run at 120 frames per second are now powering the neural networks at the heart of advanced applications in computer vision, automated speech recognition and natural language processing, Sparks notes.
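To make that point concrete, the sketch below (an illustration, not drawn from the article; the array shapes and names are arbitrary) shows that a graphics transform and a fully connected neural-network layer both reduce to the same dense matrix multiply (GEMM), which is why hardware built for one accelerates the other.

```python
import numpy as np

# Toy graphics-style workload: transform a batch of vertices with a 4x4 matrix,
# the kind of dense linear algebra a GPU runs when rendering a frame.
vertices = np.random.rand(1024, 4).astype(np.float32)       # homogeneous coordinates
transform = np.random.rand(4, 4).astype(np.float32)
projected = vertices @ transform                             # one GEMM call

# Toy neural-network workload: a fully connected layer is the same primitive,
# a matrix multiply followed by a bias add and a nonlinearity.
activations = np.random.rand(1024, 512).astype(np.float32)  # batch of inputs
weights = np.random.rand(512, 256).astype(np.float32)
bias = np.zeros(256, dtype=np.float32)
layer_out = np.maximum(activations @ weights + bias, 0.0)   # GEMM + ReLU
```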

Market projections for specialized AI chips are aggressive. Gartner projects specialized AI chip sales to reach $8 billion in 2019 and grow to $34 billion by 2023. NVIDIA’s internal projections reported by Sparks have AI chip sales reaching $50 billion by 2023, most of it anticipated to come from data center GPUs used to power deep learning. Custom silicon research is ongoing at Amazon, ARM, Apple, IBM, Intel, Google, Microsoft, NVIDIA and Qualcomm. Many startups are also in the competition, including Cerebras, Graphcore, Groq, Mythic AI, SambaNova Systems and Wave Computing, which together have raised over $1 billion.
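As a quick back-of-the-envelope check (the arithmetic below is ours, not the article's), growing from $8 billion in 2019 to $34 billion in 2023 implies a compound annual growth rate of roughly 44%, consistent with the 45% annual growth Allied Market Research cites below.

```python
# Implied compound annual growth rate (CAGR) from the Gartner figures above.
start, end, years = 8.0, 34.0, 4          # $B in 2019 -> $B in 2023
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")        # ~43.6% per year
```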

Allied Market Research projects the global AI chip market to reach $91 billion by 2025, with growth rates of 45% a year until then. Market drivers include a surge in demand for smart homes and smart cities, more investment in AI startups, the emergence of quantum computing and the rise of smart robots, according to a release from Allied on the Global Newswire. Market growth, however, is being slowed by a shortage of skilled workers.

Allied segments the market by chip type, application, industry vertical, processing technology and region. Chip types include the GPU, the application-specific integrated circuit (ASIC), the field-programmable gate array (FPGA), the central processing unit (CPU) and others. The ASIC segment is expected to register the fastest growth, at 52% per year through 2025.

At the recent International Electron Devices Meeting (IEDM) conference in San Francisco, IBM discussed innovations aimed at making hardware systems keep pace with the demands of AI software and data workloads, according to an account in Digital Journal.

Among the highlights, nanosheet technology aims to meet the requirements of AI and 5G; researchers discussed stacked nanosheet transistors and multiple-Vt (multi-threshold voltage) solutions.

Phase-change memory (PCM) has emerged as an alternative to conventional von Neumann systems to train deep neural networks (DNNs) where a synaptic weight is represented by the device conductance. However, a temporal evolution of the conductance values, referred to as conductance drift, poses challenges for the reliability of the synaptic weights. IBM presented an approach to reduce the impact of PCM conductance drift. IBM also demonstrated an ultra-low power prototype chip, with the potential to execute AI tasks in edge computing devices in real time.
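For context, the sketch below illustrates the commonly used power-law model of PCM conductance drift and a simple global rescaling to compensate for it. This is a generic illustration, not IBM's published method; the drift exponent, time scales and array sizes are assumed values.

```python
import numpy as np

def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Power-law drift model: conductance decays as (t / t0) ** (-nu).

    g0: conductance programmed at reference time t0 (seconds);
    nu: drift exponent (assumed value here; real devices vary).
    """
    return g0 * (t / t0) ** (-nu)

# Synaptic weights stored as conductances, read back after some elapsed time.
rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=(256, 256))     # programmed conductances
read_back = drifted_conductance(weights, t=3600.0)   # read one hour later

# Simple compensation: rescale the whole array by the expected drift factor so
# the read-back values match what was originally programmed.
compensation = (3600.0 / 1.0) ** 0.05
corrected = read_back * compensation

print("mean error before compensation:", np.abs(read_back - weights).mean())
print("mean error after compensation: ", np.abs(corrected - weights).mean())
```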

An example of a specific application driving an AI chip design is happening at the Argonne National Laboratory, a science and engineering research institution in Illinois. Finding the drug to which a cancer patient will best respond tests the limits of modern science. With the emergence of AI, scientists are able to combine machine learning with genomic sequencing data to help clinicians better understand how to tailor treatment plans to individual patients, according to an account in AIMed (AI in Medicine).

Argonne National Lab Employing CS-1 for Cancer Research

Argonne recently announced the first deployment of a new AI processor, the CS-1, developed by Cerebras, a computer systems startup. The system enables a faster rate of training for deep learning algorithms. The CS-1 is said to house the fastest and largest AI chip ever built.

Rick Stevens, Argonne Associate Lab Director for Computing, Environment and Life Sciences, stated in a press release, “By deploying the CS-1, we have dramatically shrunk training time across neural networks, allowing our researchers to be vastly more productive.”

The CS-1 also handles scientific data reliably and in an easy-to-use manner, including higher-dimensional data sets drawn from diverse sources. The deep learning algorithms developed to work with these models are extremely complex compared to those in computer vision or language applications, Stevens stated.

The main job of the CS-1 is to increase the speed of developing and deploying new cancer drug models. The hope is that the Argonne Lab will arrive at a deep learning model that can predict how a tumor may respond to a drug or combination of two or more drugs.
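As an illustration of what such a model might look like (a minimal sketch, not Argonne's actual model; the feature sizes, layer widths and names are hypothetical), a drug-response predictor can be framed as a network that encodes tumor and drug features separately, fuses them, and regresses a single response score:

```python
import torch
import torch.nn as nn

class DrugResponseModel(nn.Module):
    """Toy tumor-drug response regressor: encode tumor gene-expression features
    and drug fingerprint features separately, concatenate, and predict a score."""

    def __init__(self, n_genes=2048, n_drug_features=512):
        super().__init__()
        self.tumor_encoder = nn.Sequential(
            nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
        self.drug_encoder = nn.Sequential(
            nn.Linear(n_drug_features, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, gene_expression, drug_fingerprint):
        fused = torch.cat(
            [self.tumor_encoder(gene_expression),
             self.drug_encoder(drug_fingerprint)], dim=-1)
        return self.head(fused)   # predicted response, e.g. growth inhibition

# Usage with random stand-in data.
model = DrugResponseModel()
genes = torch.randn(8, 2048)      # batch of 8 tumor expression profiles
drugs = torch.randn(8, 512)       # matching drug feature vectors
print(model(genes, drugs).shape)  # torch.Size([8, 1])
```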

Read the source articles in IEEE Spectrum, on the Global Newswire, in Digital Journal and in AIMed (AI in Medicine).


