

Dear DATE community,

We, the DATE Sponsors Committee (DSC) and the DATE Executive Committee (DEC), are deeply shocked and saddened by the tragedy currently unfolding in Ukraine, and we would like to express our full solidarity with all the people and families affected by the war.

Our thoughts also go out to everyone in Ukraine and Russia, whether they are directly or indirectly affected by the events, and we extend our deep sympathy.

We condemn Russia’s military action in Ukraine, which violates international law, and we call on governments to take immediate action to protect everyone in the country, in particular its civilian population and the people affiliated with its universities.

Now more than ever, our DATE community must promote our societal values (justice, freedom, respect, community, and responsibility) and confront this situation collectively and peacefully to end this senseless war.

DATE Sponsors and Executive Committees.


Kindly note that all times on the virtual conference platform are displayed in the user's time zone.

The time zone for all times mentioned on the DATE website is CET – Central European Time (UTC+1).

11.8.1 Automating Tiny Neural Network Design with MCU Deployability in the Loop

Speaker: Danilo Pau, STMicroelectronics, Italy

Tiny Machine Learning (TinyML) is a growing and widely popular community focused on deploying Deep Learning (DL) models on microcontrollers (MCUs). To run a trained DL model on an MCU, developers must handcraft the network topology and the associated hyperparameters to fit a wide range of hardware constraints, including operating frequency, embedded SRAM and embedded Flash memory, as well as the corresponding power consumption budget.
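
To make the fitting problem concrete, the following back-of-the-envelope sketch (Python) shows the kind of check a developer performs by hand: it estimates the Flash and SRAM footprint of a small fully connected network and compares them against hypothetical MCU budgets. The layer sizes, memory budgets, and float32 storage assumption are illustrative only, and the estimate ignores operating frequency and power.

# Back-of-the-envelope deployability check for a tiny MLP on an MCU.
# Layer sizes, budgets, and 4-byte (float32) storage are illustrative
# assumptions, not figures from the talk.
FLASH_BUDGET = 1 * 1024 * 1024   # e.g., 1 MB embedded Flash
SRAM_BUDGET  = 192 * 1024        # e.g., 192 KB embedded SRAM

def mlp_footprint(layer_sizes, bytes_per_value=4):
    """Estimate Flash (weights + biases) and SRAM (live activation buffers)."""
    params = sum(n_in * n_out + n_out
                 for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
    flash = params * bytes_per_value
    # During inference, roughly one input and one output activation buffer
    # must be resident in SRAM at the same time.
    sram = max(a + b for a, b in zip(layer_sizes[:-1], layer_sizes[1:])) * bytes_per_value
    return flash, sram

flash, sram = mlp_footprint([64, 256, 128, 10])
print(f"Flash: {flash} B (fits: {flash <= FLASH_BUDGET}), "
      f"SRAM: {sram} B (fits: {sram <= SRAM_BUDGET})")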

Unfortunately, a hand-crafted design methodology poses multiple challenges: 1) AI and embedded developers have orthogonal skill sets and typically do not work together until the AI application is validated in an operational environment; 2) tools for automated network design often assume virtually unlimited resources (deep networks are typically trained on cloud- or GPU-based systems); 3) the time-to-market from conception to realization of an AI system is usually quite long. Consequently, mass-market adoption of AI technologies at the deep edge is jeopardized.

Our solution is based on Sequential Model-Based Optimization (SMBO) – also known as Bayesian Optimization (BO) – the standard methodology for Automated Machine Learning (AutoML) and Neural Architecture Search (NAS). Although AutoML and NAS are successfully applied on large GPU/cloud platforms (e.g., AutoML/NAS tools are commercialized by Google, Amazon and Microsoft), their application remains an issue for tiny devices such as MCUs. Our approach instead includes “deployability” constraints – related to the hardware resources of the MCUs – in the hyperparameter optimization process, leading to this new “AutoTinyML” perspective.
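
As an illustration of what putting deployability constraints in the loop can look like, the sketch below runs a standard SMBO loop (Gaussian-process surrogate with an Expected Improvement acquisition) in which only candidate configurations that pass a memory-footprint check are ever proposed or evaluated. The two-hyperparameter MLP search space, the memory budgets, and the synthetic objective are assumptions made for this example; they do not describe the authors' actual tool.

# Minimal sketch of SMBO/Bayesian Optimization with a deployability
# constraint in the loop. All budgets, the search space, and the objective
# are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

FLASH_BUDGET = 512 * 1024   # bytes, hypothetical MCU Flash limit
RAM_BUDGET   = 128 * 1024   # bytes, hypothetical MCU SRAM limit

def deployable(units1, units2, n_in=64, n_out=10):
    """Rough footprint estimate for a 2-layer MLP stored as float32."""
    params = n_in * units1 + units1 + units1 * units2 + units2 + units2 * n_out + n_out
    flash = 4 * params                                  # weights live in Flash
    ram = 4 * 2 * max(n_in, units1, units2, n_out)      # two live activation buffers
    return flash <= FLASH_BUDGET and ram <= RAM_BUDGET

def objective(x):
    """Stand-in for validation error after training; cheap synthetic function."""
    u1, u2 = x
    return 1.0 / (1.0 + 0.01 * u1) + 1.0 / (1.0 + 0.02 * u2) + 0.001 * rng.standard_normal()

def sample_candidates(n):
    """Draw random hyperparameter pairs, keeping only deployable ones."""
    feasible = []
    while len(feasible) < n:
        u1, u2 = rng.integers(8, 1024, size=2)
        if deployable(u1, u2):
            feasible.append((u1, u2))
    return np.array(feasible)

# Initial design: a few deployable configurations evaluated with the objective.
X = sample_candidates(8)
y = np.array([objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    pool = sample_candidates(256)                 # feasible candidates only
    mu, sigma = gp.predict(pool, return_std=True)
    # Expected Improvement acquisition (minimization form).
    imp = y.min() - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = pool[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best deployable config:", X[np.argmin(y)], "objective:", y.min())

In this sketch the deployability check acts as a hard filter on the candidate pool, so the expensive objective (in practice, training and validating the network) is never spent on configurations that could not fit the target MCU.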

This talk will present our approach, along with its pros and cons with respect to multi-objective optimization (usually adopted to reduce resource usage in the cloud). A set of relevant results will be presented and discussed, providing an overview of the open challenges and perspectives ahead in the AutoTinyML field.