A classical application of similarity search is in recommender systems: suppose you have shown interest in a particular item, for example a news article x. The semantic meaning of a piece of text can be represented as a high-dimensional feature vector, for example one computed using latent semantic indexing. In order to recommend other news articles, we might search the set P of article feature vectors for articles that are “close” to x.

In this case, for a large textual dataset containing millions of words, the problem is that there may be far too many pairs of items…
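The “close to x” idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not a scalable solution (real systems use approximate nearest-neighbor indexes); the toy feature vectors and the use of cosine similarity are my own assumptions here.

```python
import numpy as np

def most_similar(x, P, k=3):
    """Return indices of the k vectors in P closest to x by cosine similarity."""
    P_norm = P / np.linalg.norm(P, axis=1, keepdims=True)
    x_norm = x / np.linalg.norm(x)
    sims = P_norm @ x_norm        # cosine similarity of x to every row of P
    return np.argsort(-sims)[:k]  # indices of the k most similar articles

# Toy example: 5 "articles" with 4-dimensional feature vectors.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 4))
x = P[2] + 0.01 * rng.normal(size=4)  # a slight perturbation of article 2
print(most_similar(x, P, k=1))        # index of the most similar article
```

An exhaustive scan like this is O(|P|) per query, which is exactly why the “too many pairs” problem above motivates approximate methods such as locality-sensitive hashing.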

Batch normalization was introduced by Google scientists Sergey Ioffe and Christian Szegedy in 2015. Their insight was as simple as it was groundbreaking. Just as we normalize network inputs, they proposed to normalize the inputs to each layer, for each training mini-batch as it flows through the network.

See “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” by Sergey Ioffe and Christian Szegedy, 2015, https://arxiv.org/abs/1502.03167.

One common challenge when training a deep neural network is ensuring that the weights of the network remain within a reasonable range of values — if they start to become too large…
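The normalization step Ioffe and Szegedy proposed can be sketched in NumPy. This is a minimal sketch of the forward pass only (no running statistics for inference, no gradients); `gamma` and `beta` are the learnable per-feature scale and shift from their paper.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x: (batch_size, num_features) activations arriving at a layer.
    gamma, beta: learnable per-feature scale and shift.
    """
    mu = x.mean(axis=0)    # per-feature mini-batch mean
    var = x.var(axis=0)    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A mini-batch of 8 examples with 3 features, far from zero mean / unit variance.
x = np.random.default_rng(1).normal(loc=10.0, scale=5.0, size=(8, 3))
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # ~0 per feature
print(out.std(axis=0))   # ~1 per feature
```

With `gamma=1` and `beta=0` the output of each feature has roughly zero mean and unit variance over the mini-batch, which is the property that keeps layer inputs in a well-behaved range during training.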

**Link to my Kaggle Notebook** with the full code.

**Link to my YouTube video explaining the whole flow of building a DCGAN from scratch**

Facial attribute prediction is a computer vision (CV) task of deducing the set of attributes belonging to a face. Example attributes are hair color, hairstyle, age, gender, etc.

Facial attribute analysis has received considerable attention as deep learning techniques have made remarkable breakthroughs in this field over the past few years.

Deep learning based facial attribute analysis consists of two basic sub-issues:

facial attribute estimation (FAE), which recognizes whether facial attributes are present in given images…
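FAE is typically framed as multi-label binary classification: the network emits one logit per attribute, and each is squashed through an independent sigmoid. Here is a minimal NumPy sketch of that final decision step; the attribute names, the logit values, and the 0.5 threshold are illustrative assumptions, not from any particular model.

```python
import numpy as np

ATTRIBUTES = ["blond_hair", "wavy_hair", "young", "male"]  # illustrative names

def predict_attributes(logits, threshold=0.5):
    """Turn per-attribute logits into independent presence/absence decisions."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # one sigmoid per attribute
    present = probs >= threshold
    return {name: bool(p) for name, p in zip(ATTRIBUTES, present)}

# Logits as they might come from the last layer of an attribute network.
print(predict_attributes(np.array([2.3, -1.1, 0.7, -3.0])))
```

Because the sigmoids are independent, any subset of attributes can be predicted as present at once, which is what distinguishes this setup from ordinary single-label classification.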

This quick post is an introduction to my YouTube video discussing the pioneering paper “**Towards Real-World Blind Face Restoration with Generative Facial Prior**,” or GFP-GAN for short.

The paper was published very recently, in June 2021, by Xintao Wang, Yu Li, Honglun Zhang, and Ying Shan of the Applied Research Center (ARC), Tencent PCG.

For the full discussion, please see my YouTube video below.

Overview of the GFP-GAN framework (from the paper):

It consists of a degradation removal module (a U-Net) and a pretrained face GAN used as a facial prior.

They are bridged by a latent code mapping and several…

This is a brief post introducing my YouTube video on building a neural network from scratch in pure Python. **For the full explanation and code implementation, please watch the video**.

For this example, I use a really simple neural network architecture, which is the following.
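In that spirit, here is a minimal pure-Python sketch of one training step: a forward pass through a two-input, two-hidden-unit, one-output sigmoid network, followed by a gradient-descent update on a squared-error loss. The weights, learning rate, and training example here are illustrative; the network in the video may differ.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden layer with two neurons, one output neuron; weights chosen by hand.
w_hidden = [[0.15, 0.20], [0.25, 0.30]]  # w_hidden[j][i]: input i -> hidden j
w_out = [0.40, 0.45]                     # hidden j -> output
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)))
    return h, y

def train_step(x, target):
    """One forward pass plus a gradient-descent update on L = 0.5*(y - t)^2."""
    global w_out
    h, y = forward(x)
    # error signal at the output (includes the sigmoid derivative y*(1-y))
    delta_out = (y - target) * y * (1 - y)
    new_w_out = [w - lr * delta_out * hj for w, hj in zip(w_out, h)]
    # backpropagate into the input->hidden weights using the OLD output weights
    for j, row in enumerate(w_hidden):
        delta_h = delta_out * w_out[j] * h[j] * (1 - h[j])
        w_hidden[j] = [w - lr * delta_h * xi for w, xi in zip(row, x)]
    w_out = new_w_out
    return 0.5 * (y - target) ** 2

x, target = [0.05, 0.10], 0.01
loss_before = train_step(x, target)
loss_after = train_step(x, target)
print(loss_before, loss_after)  # the loss shrinks after the update
```

Repeating `train_step` drives the loss down further, which is the whole training loop in miniature.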

Since TensorFlow 2.1, mixed-precision training has been supported, making use of the Tensor Cores available in the most recent NVIDIA GPUs.

My YouTube video explaining the flow.

One way to describe mixed-precision training in TensorFlow could go like this: **MPT (Mixed Precision Training)** lets you train models where the weights are of type float32 or float64, as usual (for reasons of numeric stability), but the data (the tensors pushed between operations) have lower precision, namely 16-bit (float16).

Some of the benefits are faster model training on a compatible GPU, and because it uses 16 bits it will allow…
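Why the weights stay in float32 for numeric stability can be shown with a standalone NumPy experiment (this illustrates the precision issue itself, not the TensorFlow API): a small gradient-descent update near 1.0 is below float16's resolution and simply vanishes.

```python
import numpy as np

w32 = np.float32(1.0)
w16 = np.float16(1.0)
update = 1e-4  # a small gradient-descent step

# float16 has ~3 decimal digits of precision: the spacing between adjacent
# float16 values near 1.0 is about 0.001, so a 1e-4 update rounds away.
print(w16 + np.float16(update) == w16)  # True: the update is lost
print(w32 + np.float32(update) == w32)  # False: float32 keeps it
```

This is exactly the accumulation problem that keeping a float32 master copy of the weights avoids, while the activations flowing between layers can safely be float16.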

In this post, I will go over the mathematical need for, and the derivation of, the chain rule in the backpropagation process.

**Link to my YouTube video explaining the entire flow.**

First, for this post, I will consider a really simple neural network architecture, which is the following.
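As a one-hidden-unit sketch of the derivation (the symbols here are illustrative, not necessarily those used in the video): take $h = \sigma(w_1 x)$, $\hat{y} = \sigma(w_2 h)$, and loss $L = \tfrac{1}{2}(\hat{y} - y)^2$. The chain rule then factors each weight's gradient into a product of local derivatives:

```latex
\frac{\partial L}{\partial w_2}
  = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w_2}
  = (\hat{y} - y)\,\sigma'(w_2 h)\,h

\frac{\partial L}{\partial w_1}
  = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial h} \cdot \frac{\partial h}{\partial w_1}
  = (\hat{y} - y)\,\sigma'(w_2 h)\,w_2\,\sigma'(w_1 x)\,x
```

Notice that the first two factors of $\partial L / \partial w_1$ are shared with $\partial L / \partial w_2$; backpropagation is precisely the reuse of these shared factors layer by layer, from the output back to the input.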

In the **first part** of this blog series on the Kaggle competition for **G2Net Gravitational Wave Detection**, I discussed an introduction to gravitational waves and the fundamentals of digital signal processing.

In this part 2, I will do some simple EDA on this dataset and build a baseline ConvNet model with Keras.

**My YouTube Video Explaining the model building for Kaggle Submission.**

In this competition, you are provided with a training set of time series data containing simulated gravitational wave measurements from a network of 3 gravitational wave interferometers (LIGO Hanford, LIGO Livingston, and Virgo). Each time series…
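A baseline Keras ConvNet for this kind of input could be sketched as below. The input shape of 4096 time samples per detector is my assumption for illustration (check the competition data description for the actual shape), and the layer sizes are arbitrary starting points, not the model from the video.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Assumed shape per example: 4096 time steps x 3 detector channels
# (channels-last, so the three interferometers become Conv1D channels).
def build_baseline(n_steps=4096, n_channels=3):
    model = tf.keras.Sequential([
        layers.Input(shape=(n_steps, n_channels)),
        layers.Conv1D(16, kernel_size=64, strides=4, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(32, kernel_size=32, strides=2, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1, activation="sigmoid"),  # signal present / absent
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

model = build_baseline()
dummy = np.zeros((2, 4096, 3), dtype="float32")  # two fake examples
print(model.predict(dummy, verbose=0).shape)     # one probability per example
```

Since the competition is scored on AUC, tracking it during training gives a direct read on leaderboard-relevant progress.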

**My YouTube Video Explaining the model building for Kaggle Submission for the Gravitational Wave Competition which includes Constant-Q transform related EDA**

The **constant quality factor transform (CQT)**, introduced by J.C. Brown in 1988, is an interesting alternative to the windowed Fourier transform (STFT, the Short-Time Fourier Transform) or wavelets for time-frequency analysis.

The constant-Q transform transforms a data series to the frequency domain. It is related to the Fourier transform.

In general, the transform is well suited to musical data and proves useful where frequencies span several octaves, which makes it particularly helpful for identifying instruments.

Unlike…
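The “constant quality factor” property can be checked numerically: with geometrically spaced center frequencies $f_k = f_{\min} \cdot 2^{k/b}$ (where $b$ is bins per octave), the ratio of center frequency to bandwidth is the same for every bin. The starting frequency and bins-per-octave value below are arbitrary choices for illustration.

```python
import numpy as np

f_min = 32.70        # arbitrary starting frequency (C1, in Hz)
bins_per_octave = 12
k = np.arange(48)    # four octaves of bins

# Geometrically spaced center frequencies: f_k = f_min * 2^(k / b)
f_k = f_min * 2.0 ** (k / bins_per_octave)

# Bandwidth of each bin = spacing to the next center frequency
bandwidth = f_k * (2.0 ** (1.0 / bins_per_octave) - 1.0)

Q = f_k / bandwidth  # quality factor per bin
print(Q[:3])         # identical for every bin -> "constant Q"
```

Contrast this with the STFT, whose bins have a constant absolute bandwidth, so its quality factor grows linearly with frequency instead of staying fixed.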

This is the first part of a series on tackling the Kaggle competition for **G2Net Gravitational Wave Detection**.

**My YouTube Video Explaining the model building for Kaggle Submission for the Gravitational Wave Competition.**

In this part 1, I shall go through an introduction to gravitational waves, the fundamentals of digital signal processing required to model gravitational waves, and how machine learning and deep learning have become some of the most crucial tools for handling this fascinating phenomenon, which was first proposed by Einstein himself in his landmark paper of 1916.

In June 1916, Einstein presented to the Prussian…

ComputerVision | NLP | Kaggle Master. Ex International Financial Analyst. Linkedin — https://bit.ly/3yBFni6 | My Youtube Channel — https://bit.ly/3zGNvzc