ML algorithm cheat sheet

An #ML algorithm cheat sheet - helping narrow down to a certain set of #algorithm groupings depending on the problem at hand and what we are trying to solve from a business perspective. Figure 2 shows the additional characteristics we need to consider when choosing the right ML algorithm for the situation at hand. This is something that cannot be generic and is very situational....

May 3, 2021 · 1 min · Amit Bahree

bfloat16 - how it improves AI chip designs

Floating point calculations are slow for computers (specifically CPUs); possibly representing the same struggle for many humans. :) I remember a time when an FPU (floating point unit) was an upgrade and one had to pay extra to get one. Very useful when you needed that extra precision in computing - and in my head, it always seemed like the Turbo button. :) For most #ML workloads and computations, precision isn't the most important criterion; with ever-increasing data and parameters (looking at you, GPT-3, with 45 TB of data and 175 billion parameters!...
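
As a quick illustration of the idea behind the post (my sketch, not taken from it): bfloat16 is essentially the top 16 bits of an IEEE-754 float32 - the same sign bit and full 8-bit exponent, but only 7 mantissa bits, so you keep float32's range while giving up precision. A minimal Python demo using truncation (real hardware typically rounds):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Reinterpret x as IEEE-754 float32 and keep the top 16 bits.

    bfloat16 keeps float32's sign bit and 8-bit exponent but truncates
    the mantissa from 23 bits to 7: same range, less precision.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Expand the 16 stored bits back to a float32 by zero-padding."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return x

for v in [3.141592653589793, 1e-5, 6.5e4]:
    b = float32_to_bfloat16_bits(v)
    print(f"{v!r:>22} -> bfloat16 -> {bfloat16_bits_to_float32(b)!r}")
```

Running this shows, for example, pi collapsing to 3.140625 - a loss of precision that matters little for most ML training, which is exactly why the format suits AI chips.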

September 12, 2020 · 2 min · Amit Bahree

ML Algorithms

Sometimes one needs a quick snapshot of the options to think through, and I really like this one for that. Machine Learning Algorithms

June 13, 2019 · 1 min · Amit Bahree

Machine Learning 101

May 16, 2019 · 0 min · Amit Bahree

Python

April 18, 2019 · 0 min · Amit Bahree

Roots of #AI

The naming is unfortunate when talking about #AI. There isn't anything about intelligence - not as we humans know it. If we could rewind back to the 50s, we could perhaps rename it to something like Computational Intelligence, which is more accurate. And although I have outlined the differences between some of the elements of AI in the past, I wanted to get back to what the original intent was and how this area started....

November 12, 2018 · 4 min · Amit Bahree

#ML concepts - Regularization, a primer

Regularization is a fundamental concept in Machine Learning (#ML) and is generally used with activation functions. It is the key technique that helps with overfitting. Overfitting is when an algorithm or model 'fits' the training data too well - it seems too good to be true. Essentially, overfitting is when a model being trained learns the noise in the data instead of ignoring it. If we allow overfitting, then the network only uses (or is more heavily influenced by) a subset of the input (the larger peaks), and doesn't factor in all the input....
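
The excerpt cuts off before any math, but one common form (my illustration here, not necessarily the one the post uses) is L2 regularization: add a penalty proportional to the squared weights to the loss, so large weights - the ones that let a model latch onto noise - are discouraged. A minimal sketch:

```python
import numpy as np

def l2_regularized_loss(y_true, y_pred, weights, lam=0.01):
    """Mean-squared-error loss plus an L2 (weight-decay) penalty.

    The lam * sum(w**2) term pushes weights toward zero, which
    discourages the model from fitting noise in the training data.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = lam * np.sum(weights ** 2)
    return mse + penalty

# Tiny usage example with made-up numbers:
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
weights = np.array([0.5, -1.2, 0.8])
print(l2_regularized_loss(y_true, y_pred, weights))
```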

September 29, 2018 · 4 min · Amit Bahree

Neural Network - Cheat Sheet

Neural Networks today help with a great set of tasks that until very recently weren't possible at all - be it computer vision, medical diagnosis, or speech translation - and they form a key cornerstone of a lot of the 'magic' that Machine Learning and AI offer today. I did blog about Neural Network types (and MarI/O) sometime back; I surely cannot take credit for creating these three cheat sheets, but they are awesome and I hope you get to use and enjoy them too....

September 11, 2018 · 1 min · Amit Bahree

#ML training data

Seems like my training data for the car - perhaps a hint of #bias. 😂 #GeekyJokes #ML #AIJokes

June 15, 2018 · 1 min · Amit Bahree

Neural network basics & Activation functions

Neural networks have a very interesting aspect - they can be viewed as a simple mathematical model that defines a function. For a given function $f(x)$ which can take any input value of $x$, there will be some neural network approximating that function. This hypothesis was proven almost 20 years ago (“Approximation by Superpositions of a Sigmoidal Function” and “Multilayer feedforward networks are universal approximators”) and forms the basis of much of the #AI and #ML use cases possible....
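
As a toy illustration of that universal-approximation idea (my sketch, not from the post): a single hidden layer of sigmoid units can approximate a smooth target such as $\sin(x)$. Here only the output weights are fit, via a least-squares solve over random hidden features, which is enough to make the point:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target function to approximate on [-3, 3]
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 50 sigmoid units with random weights/biases;
# only the output layer is fit (a least-squares solve), showing a
# shallow sigmoidal network can approximate f(x) = sin(x).
W = rng.normal(scale=2.0, size=(1, 50))
b = rng.normal(scale=2.0, size=(1, 50))
H = sigmoid(x @ W + b)                      # hidden-layer activations
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ w_out
print("max abs error:", np.max(np.abs(y - y_hat)))
```

With enough hidden units the maximum error can be driven arbitrarily small, which is exactly what the cited universal-approximation results guarantee.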

June 12, 2018 · 8 min · Amit Bahree