Aditya Rastogi
I am a final-year dual-degree (5-year integrated bachelor's and master's program) student in the Department of Computer Science and Engineering at IIT Kharagpur. I am fortunate to be advised by Prof. Partha Pratim Chakrabarti and Prof. Aritra Hazra on my bachelor's and master's thesis.
I am interested in developing agents that can reason in the real world. I want to build models that can perform visual reasoning in addition to the tasks they are already good at. Presently, I am working on using attention to improve self-supervised learning models.
I have done research internships at UBC, Vancouver and the University of Sydney, where I worked on large-scale pattern matching and facial landmark detection, respectively. I also interned at Goldman Sachs in Summer 2020. I like to explain the things I read clearly, through blog posts and visualizations.
Email / CV / Twitter / Github / LinkedIn
Reducing computational constraints in SimCLR using Momentum Contrast V2 (MoCo-V2) in PyTorch
Aug, 2020
The SimCLR framework requires large batch sizes to form a good representation space, because negative pairs are generated within the same batch. Momentum Contrast V2 combines SimCLR's design improvements with the original MoCo framework to do self-supervised learning at a lower computational cost.
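At the heart of MoCo-V2 is the same InfoNCE loss as SimCLR, but the negatives come from a queue of past keys rather than the current batch. The sketch below is a minimal, illustrative version of that loss; the tensor shapes and the temperature value are assumptions, not the exact code from the post.

```python
import torch
import torch.nn.functional as F

def moco_infonce_loss(q, k, queue, temperature=0.07):
    """InfoNCE loss with a memory queue of negatives (MoCo-style sketch).

    q:     (N, D) query features from the encoder
    k:     (N, D) key features from the momentum encoder (positives)
    queue: (D, K) features of K past keys that serve as negatives
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    # positive logit: similarity of each query with its matching key
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)   # (N, 1)
    # negative logits: similarity of each query with every queued key
    l_neg = torch.einsum("nd,dk->nk", q, queue)            # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # the positive is always at index 0
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```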
Understanding SimCLR — A Simple Framework for Contrastive Learning of Visual Representations with Code
Apr, 2020
SimCLR is a simple framework for contrastive learning of visual representations. It showed that the composition of data augmentations such as color jittering and random cropping plays a critical role in learning good visual representations.
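For a concrete picture of that augmentation composition, here is a rough torchvision pipeline in the spirit of SimCLR; the exact crop size, jitter strength, and blur kernel are illustrative assumptions rather than the paper's precise settings.

```python
from torchvision import transforms

# Two independent passes of this pipeline over the same image form a positive pair.
simclr_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),   # available in newer torchvision releases
    transforms.ToTensor(),
])

# view1, view2 = simclr_augment(img), simclr_augment(img)
```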
The GNU Toolchain Explained
Mar, 2020
The GNU Toolchain is a set of programming tools on Linux systems that programmers use to build and compile their code into a program or a library. This post explains the toolchain, which includes GNU m4, GNU Make, GNU Bison, GCC, GNU Binutils, the GNU Debugger, and the GNU build system.
Visualizing Neural Networks using Saliency Maps in PyTorch
Jan, 2020
This post discusses a simple gradient-based approach (Simonyan et al.) to obtain saliency maps for a trained neural network in PyTorch.
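The idea reduces to a few lines: backpropagate the class score to the input image and look at the magnitude of the gradient at each pixel. A minimal sketch, assuming an image-classification model that returns raw class scores:

```python
import torch

def saliency_map(model, image, target_class):
    """Vanilla gradient saliency (Simonyan et al.): gradient of the target
    class score with respect to the input pixels, reduced over channels."""
    model.eval()
    image = image.clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
    score = model(image)[0, target_class]
    score.backward()
    # per-pixel maximum absolute gradient across the color channels
    return image.grad.abs().squeeze(0).max(dim=0).values     # (H, W)
```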
Solving Racetrack in Reinforcement Learning using Monte Carlo Control
Jan, 2020
This post solves the racetrack problem in reinforcement learning in a detailed, step-by-step manner. It starts by constructing the racetrack environment in Pygame and then solves the problem with the off-policy Monte Carlo control algorithm.
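The core of off-policy Monte Carlo control is the weighted importance-sampling update from Sutton & Barto. The sketch below assumes a hypothetical generate_episode(behaviour_policy) helper that returns a list of (state, action, reward) tuples; the racetrack environment itself is not reproduced here.

```python
import random
from collections import defaultdict

def off_policy_mc_control(generate_episode, n_actions, gamma=1.0, episodes=10_000):
    """Off-policy MC control with weighted importance sampling (sketch)."""
    Q = defaultdict(lambda: [0.0] * n_actions)   # action-value estimates
    C = defaultdict(lambda: [0.0] * n_actions)   # cumulative importance weights
    target = {}                                  # greedy target policy

    def behaviour(state):                        # uniformly random behaviour policy
        return random.randrange(n_actions)

    for _ in range(episodes):
        episode = generate_episode(behaviour)    # [(state, action, reward), ...]
        G, W = 0.0, 1.0
        for state, action, reward in reversed(episode):
            G = gamma * G + reward
            C[state][action] += W
            Q[state][action] += (W / C[state][action]) * (G - Q[state][action])
            target[state] = max(range(n_actions), key=lambda a: Q[state][a])
            if action != target[state]:
                break                            # remaining steps get zero weight
            W *= n_actions                       # 1 / b(a|s) for a uniform behaviour policy
    return Q, target
```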
Elucidating Policy Iteration in Reinforcement Learning — Jack’s Car Rental Problem
Oct, 2019
This post explains the policy iteration algorithm in reinforcement learning and uses it to solve Jack's car rental problem from the Sutton & Barto book.
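Policy iteration alternates between evaluating the current policy and acting greedily with respect to the evaluated values. A minimal sketch for a finite MDP, assuming the transition tensor P (states x actions x next states) and the expected-reward matrix R (states x actions) have already been built; for Jack's car rental, constructing them from the Poisson demand model is most of the work.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, tol=1e-6):
    """Policy iteration for a finite MDP (illustrative sketch)."""
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)
    V = np.zeros(n_states)
    while True:
        # policy evaluation: sweep until the value function stops changing
        while True:
            V_new = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ V
                              for s in range(n_states)])
            done = np.max(np.abs(V_new - V)) < tol
            V = V_new
            if done:
                break
        # policy improvement: act greedily with respect to the new values
        Q = R + gamma * P @ V                     # shape (n_states, n_actions)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V                      # stable policy, so we are done
        policy = new_policy
```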
Genetic algorithm to navigate in a 2D environment
In this simulation, the goal of the rockets is to reach the yellow circle while avoiding the obstacles and borders along their path. This page shows how a genetic algorithm solves this path-planning problem.
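The following sketch shows the generic loop such a genetic algorithm follows: fitness-proportional selection, single-point crossover, and random mutation over genomes (here, lists of thrust values). The fitness function and all constants are assumptions for illustration, not the demo's actual code.

```python
import random

def evolve(population, fitness, mutation_rate=0.01, generations=200):
    """Generic genetic-algorithm loop (sketch). Each genome is a list of
    floats in [-1, 1], e.g. per-frame thrust values; fitness(genome) is
    assumed to be positive and to reward getting close to the target."""
    for _ in range(generations):
        scores = [fitness(g) for g in population]

        def pick():
            # roulette-wheel (fitness-proportional) selection
            return random.choices(population, weights=scores, k=1)[0]

        next_gen = []
        for _ in range(len(population)):
            a, b = pick(), pick()
            cut = random.randrange(len(a))               # single-point crossover
            child = a[:cut] + b[cut:]
            child = [gene if random.random() > mutation_rate
                     else random.uniform(-1, 1) for gene in child]
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```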
The Expectation-Maximization Algorithm
This page is an implementation of the Expectation-Maximization (EM) algorithm from scratch. It fits a Gaussian mixture density to the points provided on the screen.
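For reference, EM for a Gaussian mixture alternates between computing responsibilities (E-step) and re-estimating the component parameters from those weighted points (M-step). A minimal NumPy/SciPy sketch, not the page's own from-scratch code:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, iters=100):
    """EM for a k-component Gaussian mixture fit to points X of shape (n, d)."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    mu = X[rng.choice(n, size=k, replace=False)]     # initial means: random points
    cov = np.stack([np.eye(d)] * k)                  # initial covariances: identity
    weights = np.full(k, 1.0 / k)                    # mixing proportions
    for _ in range(iters):
        # E-step: responsibility r[i, j] = P(component j | point i)
        r = np.stack([weights[j] * multivariate_normal(mu[j], cov[j]).pdf(X)
                      for j in range(k)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted points
        Nk = r.sum(axis=0)
        mu = (r.T @ X) / Nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = (r[:, j, None] * diff).T @ diff / Nk[j]
        weights = Nk / n
    return weights, mu, cov
```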
15 puzzle
This is the classic 15 puzzle. I became quite interested in solving it in as few moves as I could. The goal is to place the tiles in order by making moves that use the empty space. I find it fascinating how we humans can come up with good heuristic functions, based on both logic and intuition, to navigate the state space to the goal state, and with constrained memory at that.
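A classic machine counterpart to those human heuristics is the Manhattan-distance heuristic used by search algorithms such as A* and IDA*. A small sketch, assuming the board is a flat tuple of 16 integers with 0 for the blank:

```python
def manhattan_distance(board):
    """Admissible 15-puzzle heuristic: sum over all tiles of each tile's
    grid distance from its goal position (goal order 1..15, blank last)."""
    dist = 0
    for idx, tile in enumerate(board):
        if tile == 0:
            continue                      # the blank does not count
        goal = tile - 1                   # goal index of this tile
        dist += abs(idx // 4 - goal // 4) + abs(idx % 4 - goal % 4)
    return dist
```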
Diffusion
Diffusion of particles simulated by a 2D random walk. You can click on the screen to create a bunch of particles.
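Each particle simply takes a fixed-length step in a uniformly random direction every frame; releasing many particles from one point then spreads them out diffusively. A tiny sketch of one particle's walk (step size and frame count are arbitrary):

```python
import math
import random

def random_walk_2d(steps, step_size=1.0):
    """One particle's 2D random walk: a fixed step in a random direction each frame."""
    x, y, path = 0.0, 0.0, []
    for _ in range(steps):
        theta = random.uniform(0, 2 * math.pi)
        x += step_size * math.cos(theta)
        y += step_size * math.sin(theta)
        path.append((x, y))
    return path
```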
Simulating forces
In this system, same-colored particles repel each other and differently colored particles attract. You can click on the screen to create a bunch of particles, and change the attraction and repulsion constants to vary the magnitude of the corresponding forces. The forces follow the inverse-square law.
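The per-pair force computation is simple; here is an illustrative sketch where particles are dictionaries with x, y, and color keys (an assumption for the example, not the demo's actual data structure):

```python
def force_on(p1, p2, k_attract=1.0, k_repel=1.0):
    """Inverse-square force that p2 exerts on p1: same colors repel,
    different colors attract; the constants set the relative strengths."""
    dx, dy = p2["x"] - p1["x"], p2["y"] - p1["y"]
    r2 = max(dx * dx + dy * dy, 1e-6)     # avoid blow-ups at tiny separations
    r = r2 ** 0.5
    if p1["color"] == p2["color"]:
        mag = -k_repel / r2               # push p1 away from p2
    else:
        mag = k_attract / r2              # pull p1 toward p2
    return mag * dx / r, mag * dy / r     # force components acting on p1
```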
Flappy Fish
Just like Flappy Bird, the goal is to swim between columns of pipes without hitting them. The spacebar controls the fish.
Falling Blocks
You need to save your block from the other falling blocks in this game. Increase your score and climb up the levels. The falling blocks follow a Poisson distribution, so the occurrence of one falling event does not affect the probability that another occurs. The expected number of falls increases as the game progresses.
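Spawning from a Poisson distribution is a one-liner; the sketch below assumes the mean grows linearly with the level (the growth rule and constant are illustrative):

```python
import numpy as np

def blocks_this_frame(level, base_rate=0.02):
    """Number of new blocks spawned this frame: Poisson arrivals whose
    mean grows with the level, so each fall stays independent of the others."""
    return np.random.poisson(lam=base_rate * level)
```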
Convex Hull
An implementation of the gift wrapping (Jarvis march) algorithm for computing the convex hull of a set of points.
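Gift wrapping starts from the leftmost point and repeatedly picks the point that every other point lies to one side of, walking around the hull. A minimal sketch, assuming the points are (x, y) tuples in general position:

```python
def gift_wrapping(points):
    """Gift wrapping (Jarvis march): O(n*h) for n points and h hull points."""
    def cross(o, a, b):
        # > 0 if b lies to the left of the directed line from o to a
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    start = min(points)                            # leftmost point is always on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = next(pt for pt in points if pt != p)   # any candidate other than p
        for r in points:
            if cross(p, q, r) > 0:                 # r is further "outside" than q
                q = r
        p = q
        if p == start:
            return hull
```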
Perlin Noise flow-field visualization
Perlin noise is a type of gradient noise used by visual effects artists to increase the appearance of realism in computer graphics. This page uses 2D Perlin noise to create a force field: a thousand particles are dropped onto the screen and their traces are drawn to create beautiful designs.
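One common way to build such a flow field is to map the noise value at a particle's position to a heading angle and nudge the particle along it each frame. A rough sketch using the third-party noise package (an assumption made for this example, with illustrative constants):

```python
import math
import noise  # third-party "noise" package (pip install noise), assumed available

def flow_field_step(x, y, scale=0.005, speed=0.5):
    """Advance a particle one step along a Perlin-noise flow field:
    the noise value at (x, y) is mapped to a heading angle."""
    angle = noise.pnoise2(x * scale, y * scale) * 2 * math.pi
    return x + speed * math.cos(angle), y + speed * math.sin(angle)
```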