Last updated: Oct 26, 2022
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
Oct 2022
Assess

We continue to be excited by the TinyML technique and the ability to create machine learning (ML) models designed to run on low-powered and mobile devices. Until recently, executing an ML model was seen as computationally expensive and, in some cases, required special-purpose hardware. While creating the models still broadly sits within this classification, they can now be built to run on small, low-cost, low-power devices. If you've been considering using ML but thought it unrealistic because of compute or network constraints, then this technique is worth assessing.
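
TinyML isn't tied to a single toolchain, but as a rough sketch of the workflow, the example below uses TensorFlow Lite's post-training quantization (one common approach) to shrink a trained model so it can fit on a constrained device. The tiny Keras model and the output file name are placeholders for illustration only, not something prescribed by this blip.

```python
import tensorflow as tf

# Placeholder model standing in for whatever model you have actually trained.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert with post-training quantization so the resulting model is small
# enough for the limited flash and RAM of a microcontroller-class device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flatbuffer can then be deployed to the device, for example
# via TensorFlow Lite for Microcontrollers.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The quantized file is typically a fraction of the original model's size, which is what makes inference on microcontroller-class hardware feasible.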

Mar 2022
Assess

Until recently, executing a machine learning (ML) model was seen as computationally expensive and, in some cases, required special-purpose hardware. While creating the models still broadly sits within this classification, they can be created in a way that allows them to run on small, low-cost, low-power devices. This technique, called TinyML, has opened up the possibility of running ML models in situations many might assume infeasible. On battery-powered devices, for example, or in disconnected environments with limited or patchy connectivity, the model can be run locally without prohibitive cost. If you've been considering using ML but thought it unrealistic because of compute or network constraints, then this technique is worth assessing.
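
To illustrate the "run locally" point, here is a minimal sketch of invoking a quantized model with the TensorFlow Lite interpreter. On a true microcontroller the equivalent step would be written in C++ against TensorFlow Lite for Microcontrollers; the Python interpreter, the model file name and the input shape are assumptions made only to keep the example self-contained.

```python
import numpy as np
import tensorflow as tf

# Load the quantized model produced earlier (file name assumed).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the assumed model shape; a real device would feed
# sensor readings here instead.
sample = np.random.rand(1, 10).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()

prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```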

Published: Mar 29, 2022
