Fundamentals of TinyML (Harvard course)
Always-on ML use cases
Wake-word detection and keyword spotting
Feature selection and the Swiss-roll projection technique
The TinyML FPGA implementation talk covers paring the datapath down to an integer accumulator and removing the FPU, two of the hardware constraints that make ML on tiny devices challenging.
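The accumulator point can be made concrete: a multiply-accumulate loop over int8 values that sums into a wide integer accumulator, the kind of integer-only arithmetic that lets tiny hardware skip the FPU entirely. A minimal sketch with made-up values:

```python
# Sketch: integer multiply-accumulate (MAC), the core op left after
# removing the FPU. Weights and activations are quantized to int8;
# products are summed in a wide (e.g. 32-bit) accumulator, so no
# floating-point hardware is needed. Values below are hypothetical.
weights = [23, -87, 45, 12]        # int8 range: -128..127
activations = [56, 14, -90, 33]    # int8 range: -128..127

acc = 0                            # a 32-bit accumulator in real hardware
for w, a in zip(weights, activations):
    acc += w * a                   # int8 * int8 product fits easily in int32

print(acc)                         # prints -3584
```

An int8 product is at most 16 bits wide, so thousands of them can be summed into a 32-bit accumulator without overflow.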
Learning rate = step size
Gradient Descent in TensorFlow and TinyML Google Colab
Fun to run this program. I had to add these imports:
import tensorflow as tf
import keras
import numpy as np
but "import keras" failed; the working form was:
import tensorflow as tf
import numpy as np
from tensorflow import keras
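The gradient-descent loop the colab wraps in Keras can also be written out by hand. A plain-numpy sketch, assuming the colab's usual toy task of fitting y = 2x - 1 with a single neuron; note that the learning rate is literally the step size taken along the gradient:

```python
import numpy as np

# Gradient descent on a one-neuron model y = w*x + b, fitting y = 2x - 1.
# Toy data assumed to match the colab's example.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0
lr = 0.05                     # learning rate = step size along the gradient

for _ in range(2000):
    pred = w * xs + b
    err = pred - ys
    # Gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(err * xs)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w          # step against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # converges near w = 2, b = -1
```

Raising lr makes each step bigger (faster but possibly divergent); lowering it makes training slower but more stable.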
2.2 Key layer terms:
Flatten
Dense
Activation functions: ReLU, Softmax
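How those 2.2 pieces fit together can be sketched as a single forward pass in plain numpy. The shapes and random weights below are placeholders, not the course's trained model:

```python
import numpy as np

# The 2.2 terms in one forward pass: Flatten -> Dense + ReLU -> Dense + Softmax.
# Weights here are random placeholders; in Keras these would be learned layers.
rng = np.random.default_rng(0)

image = rng.random((28, 28))          # one grayscale input, MNIST-sized
x = image.reshape(-1)                 # Flatten: 28x28 -> 784-element vector

W1 = rng.standard_normal((784, 128)) * 0.05
b1 = np.zeros(128)
h = np.maximum(0.0, x @ W1 + b1)      # Dense layer with ReLU activation

W2 = rng.standard_normal((128, 10)) * 0.05
b2 = np.zeros(10)
logits = h @ W2 + b2
e = np.exp(logits - logits.max())     # subtract max for numerical stability
probs = e / e.sum()                   # Softmax: probabilities over 10 classes

print(probs.shape)                    # (10,) -- probabilities summing to 1
```

ReLU keeps hidden activations non-negative; softmax turns the final scores into a probability distribution over the classes.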
2.3 Exploring Machine Learning Scenarios
CNN
Recurrent Layers
LSTM
Pooling
Feature Extraction
DNN
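Convolution and pooling, the feature-extraction terms above, can be shown in miniature. The kernel values below are a hypothetical vertical-edge filter, not anything from the course:

```python
import numpy as np

# 2.3 concepts in miniature: a convolution kernel extracts a feature map,
# then pooling downsamples it. Kernel values are a hypothetical edge filter.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])      # responds to vertical edges

# Valid 2D convolution (cross-correlation, as CNN layers actually compute it)
fh, fw = kernel.shape
out_h, out_w = image.shape[0] - fh + 1, image.shape[1] - fw + 1
feature_map = np.array([[np.sum(image[i:i+fh, j:j+fw] * kernel)
                         for j in range(out_w)]
                        for i in range(out_h)])

# 2x2 max pooling with stride 2 halves each spatial dimension
pooled = feature_map[:4, :4].reshape(2, 2, 2, 2).max(axis=(1, 3))

print(feature_map.shape, pooled.shape)   # (5, 5) (2, 2)
```

Pooling is what makes CNNs cheap enough for tinyML: each 2x2 max-pool discards three quarters of the activations while keeping the strongest feature response.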
2.4 Building a computer vision model
TinyML colabs
Recognizing handwritten MNIST digits on an Arduino with 2KB RAM using the LogNNet neural network: https://t.co/24RhP9tZRE #tinyML
— Edge Impulse (@EdgeImpulse) May 18, 2021
Recap
Neural Network
Gradient Descent
Loss Function
Kernels, Filters
CNNs vs. DNNs
Training
Inference
Features
Overfitting
Data Augmentation
Preprocessing
Training Data, Validation Data, Test Data
Classification, Regression
Quantisation
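Quantisation is the recap term most specific to tinyML, so a minimal sketch of int8 affine quantisation is worth having. The weight values are made up; a real deployment would use the TensorFlow Lite converter rather than hand-rolled arithmetic:

```python
import numpy as np

# Quantisation in miniature: map float32 weights onto int8 with an affine
# scale/zero-point, the general scheme TensorFlow Lite uses for deployment.
weights = np.array([-1.2, 0.0, 0.37, 2.5], dtype=np.float32)

scale = (weights.max() - weights.min()) / 255.0   # one step of the int8 grid
zero_point = int(np.round(-128 - weights.min() / scale))

# Quantise: snap each float to the nearest int8 grid point
q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantise to see how much precision was lost
dequant = (q.astype(np.float32) - zero_point) * scale

print(np.max(np.abs(dequant - weights)))   # error within half a grid step
```

Four bytes per weight become one, and the int8 values feed the integer MAC units that tiny hardware actually has.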