

Sunday, May 16, 2021

Fundamentals of tinyML

Notes from Fundamentals of TinyML, the HarvardX course on edX.

Always-on ML use cases

Wake-word and keyword spotting

Knowledge distillation

Feature selection and the Swiss-roll projection technique (a classic dimensionality-reduction example)
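
On knowledge distillation from the list above: a minimal sketch of the idea in TensorFlow (my own illustration, not course code; the temperature and the logits are made up). The student network is trained to match the teacher's softened probability distribution:

import tensorflow as tf

# Distillation loss: cross-entropy between the teacher's softened
# probabilities and the student's, at temperature T (scaled by T^2).
def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    soft_targets = tf.nn.softmax(teacher_logits / temperature)
    log_probs = tf.nn.log_softmax(student_logits / temperature)
    return -tf.reduce_mean(
        tf.reduce_sum(soft_targets * log_probs, axis=-1)) * temperature ** 2

# Dummy logits just to show the call.
teacher = tf.constant([[2.0, 0.5, -1.0]])
student = tf.constant([[1.5, 0.2, -0.5]])
print(distillation_loss(teacher, student).numpy())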

A talk on tinyML FPGA implementations covered paring the datapath down to an accumulator and removing the FPU entirely, which highlights some of the hardware challenges of doing ML in tinyML.
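
To make the FPU point concrete, here is a minimal sketch (my illustration, not from the talk) of integer-only multiply-accumulate, the kind of arithmetic that remains once the FPU is gone: int8 weights and activations, with products summed in a wide int32 accumulator to avoid overflow.

import numpy as np

def int8_mac(weights, activations):
    # Wide accumulator: int8 * int8 products are summed in int32.
    acc = np.int32(0)
    for w, a in zip(weights, activations):
        acc += np.int32(w) * np.int32(a)
    return acc

w = np.array([12, -3, 45], dtype=np.int8)
a = np.array([7, 100, -2], dtype=np.int8)
print(int8_mac(w, a))  # 12*7 + (-3)*100 + 45*(-2) = -306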

Learning rate = step size of each gradient-descent update

Loss function

TensorFlow in Google Colab

Gradient descent in TensorFlow, from the tinyML Google Colab notebooks
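
A minimal sketch of what that colab does, reconstructed from memory rather than copied: each step of gradient descent moves the weights by w <- w - learning_rate * dLoss/dw, with mean squared error as the loss, fitting y = 2x - 1.

import tensorflow as tf

xs = tf.constant([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = tf.constant([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w = tf.Variable(0.0)
b = tf.Variable(0.0)
learning_rate = 0.09  # the step size

for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xs + b - ys))  # MSE loss
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dw)  # w <- w - lr * dLoss/dw
    b.assign_sub(learning_rate * db)

print(w.numpy(), b.numpy())  # should approach 2 and -1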

Types of neural networks

Fun to run this program. I had to add the imports myself; my first attempt was:

import tensorflow as tf
import keras
import numpy as np

What the notebook actually accepted was Keras imported from inside TensorFlow:

import tensorflow as tf
import numpy as np
from tensorflow import keras
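
For context, the program in question is (as best I can reconstruct it) the single-neuron "hello world" that fits y = 2x - 1, this time with Keras running the training loop; the exact epoch count and data values are my assumptions.

import tensorflow as tf
import numpy as np
from tensorflow import keras

# One Dense neuron learning y = 2x - 1 from six points.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(units=1),
])
model.compile(optimizer='sgd', loss='mean_squared_error')

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=500, verbose=0)
print(model.predict(np.array([[10.0]])))  # should be close to 19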



2.2

Flatten
Activation functions: ReLU, softmax
Dense
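
A minimal sketch tying those terms together (the 28x28 input and the layer sizes are my assumptions, MNIST-style):

import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),                        # 28x28 -> 784 vector
    keras.layers.Dense(128, activation='relu'),    # hidden layer, ReLU
    keras.layers.Dense(10, activation='softmax'),  # class probabilities
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()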

2.3 Exploring Machine Learning Scenarios

CNN: pooling, feature extraction
Recurrent layers: LSTM (see the sketch after this list)
DNN
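
For the recurrent side, a minimal LSTM sketch (the input shape of 20 timesteps by 8 features is an assumption for illustration):

import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20, 8)),  # 20 timesteps, 8 features each
    keras.layers.LSTM(32),       # recurrent layer
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()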

2.4 Building a Computer Vision Model

tinyML colabs
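
A minimal sketch of the kind of vision model those colabs build: convolution plus pooling for feature extraction, then a dense head (the 28x28x1 input and the filter counts are my assumptions):

import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, (3, 3), activation='relu'),  # learned filters/kernels
    keras.layers.MaxPooling2D((2, 2)),                   # downsample feature maps
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()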


Recap

Neural Network
Gradient Descent
Loss Function
Kernels, Filters
CNNs vs. DNNs
Training
Inference
Features
Overfitting
Data Augmentation
Preprocessing
Training Data, Validation Data, Test Data
Classification, Regression
Quantisation
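
On quantisation, the most tinyML-specific item in the recap: a minimal post-training quantisation sketch using the TensorFlow Lite converter (the stand-in model exists only to keep the example self-contained):

import tensorflow as tf
from tensorflow import keras

# Tiny stand-in model so the sketch runs on its own.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(1),
])

# Dynamic-range quantisation: weights are stored as int8, shrinking
# the model so it fits on microcontroller-class devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)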


