A classical application of similarity search is in recommender systems: Suppose you have shown interest in a particular item, for example a news article x. The semantic meaning of a piece of text can be represented as a high-dimensional feature vector, for example computed using latent…
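The core mechanic can be sketched in a few lines: given feature vectors, recommend the items whose vectors are most similar to the one the user engaged with. This is a minimal NumPy sketch; the catalog, item names, and 3-d vectors are made up for illustration (real embeddings are far higher-dimensional).

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def most_similar(query, catalog, k=2):
    # Rank every catalog item by similarity to the query vector
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy 3-d "semantic" vectors (hypothetical values)
catalog = {
    "article_a": np.array([0.9, 0.1, 0.0]),
    "article_b": np.array([0.8, 0.2, 0.1]),
    "article_c": np.array([0.0, 0.1, 0.9]),
}
x = np.array([1.0, 0.0, 0.0])  # the article the user read
print(most_similar(x, catalog))  # → ['article_a', 'article_b']
```

At scale, the exhaustive sort above is replaced by approximate nearest-neighbor indexes, but the similarity measure is the same idea.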
The mean is called a measure of central tendency because it tells us something about the center of a distribution, specifically its center of mass. Other measures of central tendency that are commonly used in statistics are the median and the mode, which we now define.
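All three measures are one-liners with Python's standard library; the sample data here is made up to show how they can disagree on the same distribution:

```python
from statistics import mean, median, mode

data = [2, 3, 3, 5, 7, 10]
print(mean(data))    # 5 — the center of mass of the data
print(median(data))  # 4 — the middle value (average of 3 and 5 here)
print(mode(data))    # 3 — the most frequent value
```

Because the mean is a center of mass, it is pulled toward outliers (the 10 above), while the median and mode are not.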
Scikit-Learn offers two classes for hyperparameter tuning with k-fold cross-validation, GridSearchCV and RandomizedSearchCV; this post implements a simplified version of RandomizedSearchCV in pure Python.
My YouTube video explaining the full code for implementing RandomizedSearchCV from scratch, without scikit-learn.
The basics of cross-validation are as follows: the model is trained on the training folds and evaluated once on each held-out validation fold, and the fold scores are then averaged. This gives a more reliable estimate of generalization performance than a single train/test split. …
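The train-evaluate-average loop can be sketched in pure Python like this. Note that `fit` and `score` are hypothetical callables standing in for whatever model you plug in; the real post's implementation may be structured differently.

```python
import random

def k_fold_indices(n, k, seed=0):
    # Shuffle the indices once, then slice them into k roughly equal folds
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    fold_size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < rem else 0)
        folds.append(idx[start:end])
        start = end
    return folds

def cross_val_score(fit, score, X, y, k=5):
    # Train on k-1 folds, evaluate on the held-out fold, average the scores
    folds = k_fold_indices(len(X), k)
    scores = []
    for i, val_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = fit([X[j] for j in train_idx], [y[j] for j in train_idx])
        scores.append(score(model,
                            [X[j] for j in val_idx],
                            [y[j] for j in val_idx]))
    return sum(scores) / k
```

A randomized search then just samples hyperparameter combinations, runs `cross_val_score` for each, and keeps the best-scoring one.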
Batch normalization was introduced by Google scientists Sergey Ioffe and Christian Szegedy in 2015. Their insight was as simple as it was groundbreaking. Just as we normalize network inputs, they proposed to normalize the inputs to each layer, for each training mini-batch as it flows through the network.
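The per-mini-batch normalization they proposed is easy to sketch in NumPy. This is only the training-time forward pass (the paper's running statistics for inference are omitted), and the batch values here are made up:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then rescale and shift.
    # gamma (scale) and beta (shift) are learned parameters.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.array([[1.0, 50.0],
                  [2.0, 60.0],
                  [3.0, 70.0]])
out = batch_norm(batch, gamma=np.ones(2), beta=np.zeros(2))
# Each feature column now has roughly zero mean and unit variance
```

With `gamma` and `beta` learnable, the layer can undo the normalization if that turns out to be optimal, which is a key part of the original design.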
Link to my Kaggle Notebook with the full code.
Link to my YouTube video explaining the whole flow of building a DCGAN from scratch.
Facial attribute prediction is a computer vision (CV) task of inferring the set of attributes belonging to a face. …
This quick post is an introduction to my YouTube video discussing the pioneering paper "Towards Real-World Blind Face Restoration with Generative Facial Prior", or GFP-GAN for short.
The paper was published in June 2021 by Xintao Wang, Yu Li, Honglun Zhang, and Ying Shan of…
This is a brief post introducing my YouTube video on building a neural network from scratch with pure Python. For the full explanation and code implementation, please watch the video.
For this example, I use a really simple neural network architecture, shown below.
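To give a flavor of what "from scratch" means here, this is a minimal pure-Python sketch of the same idea: a single sigmoid neuron trained by gradient descent on squared error. The 2-input/1-output architecture, the OR-function data, and the hyperparameters are my own illustrative choices, not necessarily the ones in the video.

```python
import math, random

random.seed(0)

# Tiny network: 2 inputs -> 1 sigmoid output
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Learn the OR function with plain stochastic gradient descent
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
lr = 1.0
for _ in range(2000):
    for x, t in data:
        y = forward(x)
        # delta = dLoss/dz for squared error through the sigmoid
        delta = (y - t) * y * (1 - y)
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta
```

After training, `round(forward(x))` reproduces the OR truth table; everything, forward pass, gradient, and update, fits in a handful of lines with no libraries.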
Since TensorFlow 2.1, mixed-precision training has been supported, making use of the Tensor Cores available in recent NVIDIA GPUs.
My YouTube video explaining the flow.
One way to describe mixed-precision training in TensorFlow could go like this: mixed-precision training (MPT) lets you train models where…
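The numerical idea behind MPT, keeping a float32 master copy of the weights and scaling the loss so that small float16 gradients do not underflow, can be demonstrated with NumPy alone. The gradient value and scale factor below are illustrative, not taken from any real model:

```python
import numpy as np

small_grad = 1e-8                     # a gradient a float16 backward pass must carry
lost = np.float16(small_grad)         # underflows: float16 cannot represent 1e-8
# lost == 0.0 — the gradient signal vanishes entirely in half precision

# Loss scaling: scale the loss up before backprop so gradients stay
# representable in float16, then unscale in float32 before the weight update.
scale = 1024.0
scaled = np.float16(small_grad * scale)  # ~1e-5, representable in float16
recovered = np.float32(scaled) / scale   # unscaled in the float32 master copy
# recovered > 0 — the gradient survives the round trip
```

This is exactly why mixed-precision setups pair float16 compute with float32 master weights and a loss-scaling factor.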
In this post, I will go over the mathematical motivation for, and the derivation of, the chain rule in backpropagation.
For this post, I will consider a really simple neural network architecture, shown below.
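As a taste of the derivation, take the simplest possible case (my own illustrative example, not necessarily the network in the post): one neuron y = sigmoid(w·x + b) with squared-error loss L = ½(y − t)². The chain rule gives dL/dw = (y − t) · y(1 − y) · x, and we can verify it numerically against a finite-difference gradient:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def loss(w, b, x, t):
    # Forward pass: z = w*x + b, y = sigmoid(z), L = 0.5*(y - t)^2
    return 0.5 * (sigmoid(w * x + b) - t) ** 2

def grad_w(w, b, x, t):
    # Chain rule: dL/dw = dL/dy * dy/dz * dz/dw = (y - t) * y*(1 - y) * x
    y = sigmoid(w * x + b)
    return (y - t) * y * (1 - y) * x

w, b, x, t = 0.5, -0.3, 2.0, 1.0   # arbitrary example values
analytic = grad_w(w, b, x, t)
eps = 1e-6
numeric = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
# analytic and numeric agree closely, confirming the chain-rule derivation
```

Backpropagation is this same factorization applied layer by layer, reusing the upstream factor (y − t)·y(1 − y) for every parameter of the layer.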