Abstract: Answering natural-language questions is a remarkable ability of search engines. However, the state of the art proves brittle when handling complexity in question formulation. Such complexity has two distinct dimensions: (i) the diversity of expressions that convey the same information need, and (ii) complexity in the information need itself. We propose initial solutions to both challenges: (i) syntactic differences in question formulation can be tackled with a continuous-learning framework that extends template-based answering with semantic similarity and user feedback; (ii) complexity in information needs can be addressed by stitching pieces of evidence from multiple documents into a noisy graph, within which answers can be detected using optimal interconnections. The talk will discuss results for these proposals and conclude with promising open directions in question answering.

Biography: He is currently a post-doctoral researcher working with Prof. Gerhard Weikum in the Databases and Information Systems Group at the Max Planck Institute for Informatics (MPII), Saarbruecken, Germany. His current research areas include question answering over knowledge bases and text, and user-oriented privacy and transparency in online forums. Prior to MPII, he worked for one and a half years as a Computer Scientist at Adobe Research, Bangalore, India. He completed his PhD as a Microsoft Research India PhD Fellow at the Indian Institute of Technology (IIT) Kharagpur. When not working, he can often be found poring over all kinds of word and logic puzzles.
Abstract: We study quantum algorithms for NP-complete problems whose best classical algorithms are exponential-time applications of dynamic programming. We show a simple technique that combines Grover's search with computing a partial dynamic programming table. It allows us to construct an algorithm that solves the travelling salesman problem and minimum set cover in time O*(1.728^n). We use this approach to solve a variety of vertex ordering problems on graphs in time O*(1.817^n), and graph bandwidth in time O*(2.946^n). A preprint can be accessed at https://arxiv.org/pdf/1807.05209.pdf

Biography: Evgeny Vihrov is currently a graduate student at the University of Latvia, working at the Centre for Quantum Computer Science under the supervision of Andris Ambainis. Evgeny's interests lie in computational complexity, specifically in the design of quantum algorithms and in query complexity. Evgeny received his Master's degree at the University of Latvia, Faculty of Computing.
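For intuition about the classical baseline this talk starts from, here is a minimal sketch of the Held-Karp dynamic programming algorithm for TSP, the O*(2^n) classical method whose partially computed table the quantum technique combines with Grover search. The function name and the toy instance are illustrative, not taken from the paper.

```python
import itertools

def held_karp_tsp(dist):
    """Classical Held-Karp dynamic programming for TSP in O*(2^n) time.

    dist[i][j] is the cost of travelling from city i to city j.
    Returns the cost of the cheapest tour starting and ending at city 0.
    """
    n = len(dist)
    # dp[(S, j)] = cheapest cost of a path that starts at city 0, visits
    # exactly the cities in frozenset S, and ends at city j (j in S).
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in itertools.combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in S - {j})
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The quantum algorithm's speedup comes from computing only part of this table classically and searching over the remainder with Grover's algorithm, which this classical sketch does not attempt.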
Abstract: Modern autonomous, safety-critical and defense systems are complex software systems implemented over heterogeneous and constantly evolving hardware and software platforms. These systems interact among themselves as well as with humans and are expected to successfully execute and handle critical operations. Developing trust in such intelligent software systems or agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. Towards this end, this presentation will focus on an application of formal verification to ensure that the designed human-agent team accomplishes the assigned task. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture, which incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent copilot that teams with a human in a takeoff operation, while a contingency scenario involving an engine-out is potentially executed. Formal verification and counterexample generation enable increased confidence in the designed procedures and behavior of the intelligent copilot system.

Biography: Siddhartha (Sid) Bhattacharyya's primary areas of research expertise and interest are formal methods for the design, verification and validation of autonomous systems, cyber security, smart power grids, avionics and systems biology. He is presently a faculty member in the Computer Engineering and Sciences department at the Florida Institute of Technology. Previously, he was a Sr. Research Scientist at the Rockwell Collins Advanced Technology Center and a faculty member at Kentucky State University.
His research efforts have been funded by the National Aeronautics and Space Administration (NASA), the Defense Advanced Research Projects Agency (DARPA), the Air Force Research Lab (AFRL) and the Office of Naval Research (ONR). He has also worked on several collaborative efforts on model-based engineering and analysis with Honeywell, Boeing, Lockheed Martin, the Software Engineering Institute at Carnegie Mellon University, and MIT Lincoln Lab. He was a summer research fellow at the Applied Research Laboratory at Pennsylvania State University, where he worked on the design, verification, simulation and synthesis of mission control for an autonomous underwater vehicle. He also worked as a summer faculty fellow at Oak Ridge National Laboratory, where he developed methods for the design and analysis of the complex smart power grid. PhD, University of Kentucky, Lexington, KY, USA, 2005; MSc, Iowa State University, Ames, IA, USA, 2003; BE, BIT Mesra, India, 2001.
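The verification workflow the abstract describes (an executable specification, automated checking, counterexample generation) can be illustrated in miniature. The sketch below is a hypothetical toy, not the speaker's toolchain: it performs an explicit-state exhaustive search over a made-up takeoff/engine-out transition model and returns a counterexample trace whenever a safety property fails.

```python
from collections import deque

# Hypothetical transition relation: a state is (phase, engines_ok).
def transitions(state):
    phase, engines_ok = state
    if phase == "taxi":
        yield ("takeoff_roll", engines_ok)
    elif phase == "takeoff_roll":
        if engines_ok:
            yield ("climb", True)
            yield ("takeoff_roll", False)   # engine-out contingency occurs
        else:
            yield ("abort", False)          # copilot procedure: reject takeoff
    # "climb" and "abort" are terminal in this toy model

def check_safety(init, prop):
    """Breadth-first search over reachable states.

    Returns None if prop holds on every reachable state, otherwise a
    shortest path (counterexample trace) to a violating state.
    """
    frontier = deque([[init]])
    seen = {init}
    while frontier:
        path = frontier.popleft()
        if not prop(path[-1]):
            return path                     # counterexample trace
        for nxt in transitions(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

For instance, the safety property "never climb with a failed engine" holds in this model, while the property "takeoff is never aborted" fails and yields a trace through the engine-out contingency.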
Abstract: User authentication with multiple factors is an emerging trend for securing access to the sensitive information of an organization. Multi-Factor Authentication (MFA) is used to genuinely identify authorized users through an authentication process based on passwords, security tokens, biometrics, cognitive behavior metrics, software/hardware sensors, etc. The talk will highlight patented research on Adaptive Multi-Factor Authentication (A-MFA), which uses a combination of passwords, biometrics, cognitive behavior, and other human factors to create a trustworthy authentication system that intelligently selects the most appropriate authentication factors in different operating environments, so as to make the authentication strategy unpredictable. A-MFA does this by selecting modalities based on the device in use and the surrounding conditions, balancing strong security against low-maintenance usability and yielding a secure, high-confidence authentication framework. A-MFA has many potential applications in areas including finance, healthcare, and e-governance: in particular, secure online banking, online testing in education and training settings, secure access to Electronic Medical Records, and access to sensitive sites by government employees.

Biography: Dr. Dipankar Dasgupta has been a Professor of Computer Science at the University of Memphis since 1997. Dr. Dasgupta is at the forefront of research in applying bio-inspired and ML-based approaches to cyber defense. Some of his groundbreaking works, like digital immunity, negative authentication, the cloud insurance model, and Auth-Spectrum, put his name in Computer World Magazine and other news media. Prof. Dasgupta has been an Advisory Board member of the Geospatial Data Center (GDC) at the Massachusetts Institute of Technology since 2010, and has worked on joint research projects with MIT.
His latest textbook, Advances in User Authentication, was published by Springer-Verlag in August 2017. Dr. Dasgupta has more than 250 publications with 15,000+ citations and an h-index of 57 according to Google Scholar. He received five Best Paper Awards at international conferences (1996, 2006, 2009, 2012 and 2017) and two Best Runner-Up Paper Awards (2013 and 2014). He is the recipient of the 2012 Willard R. Sparks Eminent Faculty Award, the highest distinction and most prestigious honor given to a faculty member by the University of Memphis. Prof. Dasgupta received the 2014 ACM SIGEVO Impact Award and has also been designated an ACM Distinguished Speaker. Since 2007, he has been organizing the Symposium on Computational Intelligence in Cyber Security (CICS) at the IEEE Symposium Series on Computational Intelligence (SSCI).
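The patented A-MFA selection logic is not public, so the following is only a hypothetical sketch of the idea described in the abstract: score each authentication factor for the current operating environment and sample a combination weighted towards trustworthy factors, so that the chosen set stays hard to predict. All factor names and trust values here are made up.

```python
import random

# Assumed trust scores per (factor, environment); the values are invented
# for illustration and are not from the patented A-MFA system.
TRUST = {
    ("password",    "office"): 0.6, ("password",    "mobile"): 0.5,
    ("fingerprint", "office"): 0.9, ("fingerprint", "mobile"): 0.8,
    ("face",        "office"): 0.8, ("face",        "mobile"): 0.4,
    ("keystroke",   "office"): 0.7, ("keystroke",   "mobile"): 0.3,
}

def select_factors(environment, k=2, rng=random):
    """Return k distinct factors, sampled with trust-weighted probability.

    Weighted random sampling (rather than always taking the top-k) keeps
    the selected combination unpredictable across login attempts.
    """
    factors = [f for (f, env) in TRUST if env == environment]
    weights = [TRUST[(f, environment)] for f in factors]
    chosen = []
    while len(chosen) < k:
        f = rng.choices(factors, weights=weights)[0]
        if f not in chosen:
            chosen.append(f)
    return chosen
```

A caller would then run the concrete authentication checks for the returned modalities, e.g. `select_factors("mobile", k=2)`.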
Abstract: In an increasingly polarized world, demagogues who reduce complexity to simple arguments based on emotion are gaining in popularity. Are opinions and online discussions falling into demagoguery? In this work, we aim to provide computational tools to investigate this question and, by doing so, explore the nature and complexity of online discussions and their space of opinions, uncovering where each participant lies. More specifically, we present a modeling framework to construct latent representations of opinions in online discussions which are consistent with human judgments, as measured by online voting. If two opinions are close in the resulting latent space of opinions, it is because humans think they are similar. Our framework is theoretically grounded and establishes a surprising connection between opinion and voting models and the sign-rank of matrices. Moreover, it also provides a set of practical algorithms to both estimate the dimensionality of the latent space of opinions and infer where the opinions expressed by the participants of an online discussion lie in this space. Experiments on a large dataset from Yahoo! News, Yahoo! Finance, Yahoo! Sports, and the Newsroom app show that many discussions are multisided, reveal a positive correlation between the complexity of a discussion, its linguistic diversity and its level of controversy, and show that our framework may be able to circumvent language nuances such as sarcasm or humor by relying on human judgments instead of textual analysis.

Biography: Manuel Gomez Rodriguez is a tenure-track faculty member at the Max Planck Institute for Software Systems. Manuel develops machine learning and large-scale data mining methods for the analysis, modeling and control of large social and information systems.
He is particularly interested in the creation, acquisition and/or dissemination of reliable knowledge and information, which is ubiquitous on the Web and social media, and has received several recognitions for his research, including an Outstanding Paper Award at NIPS'13 and Best Research Paper Honorable Mentions at KDD'10 and WWW'17. Manuel holds a BS in Electrical Engineering from Carlos III University in Madrid (Spain), an MS and PhD in Electrical Engineering from Stanford University, and received postdoctoral training at the Max Planck Institute for Intelligent Systems. You can find more about him at http://learning.mpi-sws.org.
Abstract: Climatic conditions have a profound impact on the lives of a billion people in India. However, several questions related to the Indian climate remain unanswered for climate scientists. The widespread availability of high-quality climatic data, along with advances in Data Science and Machine Learning, has opened up scope for a data-driven approach to these problems. Climatic data is spatio-temporal in nature, where each climatic variable (temperature, rainfall, wind speed, etc.) is measured at different locations and time-points. Climatic processes can occur at different spatial and temporal scales, but usually each process spans many spatial and temporal locations. We build a spatio-temporal model based on a Markov Random Field, where climatic processes are encoded by discrete latent variables and spatio-temporal coherence is maintained through edge potential functions. This model is used as the basis of our approach to three problems: detection of large-scale anomaly events, daily rainfall simulation, and understanding the spatio-temporal dynamics of the Indian Monsoon. A slight deviation (anomaly) from normal conditions (climatology) can have severe impacts. In India, the most significant impacts are caused by excess/deficient rainfall and excess temperature. The most significant anomalies are those that extend over a large area and/or persist for a long time, which we call "anomaly events". Detecting and localizing such events in space and time is a challenge, which we address using our proposed models. The model also allows us to estimate several statistical properties at local and regional scales, which are in turn used as inputs to stochastic models to simulate daily climatic conditions across India. We show that our simulations reflect the spatio-temporal properties of the process much more accurately than simulations by the dynamical models used by climate scientists.
We also identify homogeneous zones across the landmass, within which rainfall simulation may be carried out more accurately. Additionally, we use our framework to identify common spatial and temporal patterns of rainfall over India during the monsoon season, which provide important insights into the dynamics of this highly complex phenomenon and create some scope for rainfall prediction at local and regional scales.

Biography: Dr. Adway Mitra is currently an assistant professor of Computer Science and Engineering at IIT Bhubaneswar. Prior to this, he was a postdoctoral fellow at the International Centre for Theoretical Sciences (ICTS-TIFR) in Bangalore. He received his PhD from the Indian Institute of Science in 2016, under the guidance of Prof. Chiranjib Bhattacharyya. His primary research interest is in modeling complex spatio-temporal processes, especially related to climate, using Statistics, Data Mining and Machine Learning.
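As a toy illustration of the modeling idea in the talk above (discrete latent variables with edge potentials enforcing spatio-temporal coherence), the sketch below smooths a grid of noisy +1/-1 anomaly labels with iterated conditional modes (ICM) on an Ising-like Markov Random Field. This is not the speaker's model; the potentials and the greedy ICM inference are simplifications chosen for brevity.

```python
def icm_smooth(obs, coupling=1.0, iters=5):
    """Smooth a 2D grid of +1/-1 labels with iterated conditional modes.

    Each cell's score combines a data term (agree with the observation)
    and a smoothness term (agree with the 4-neighbourhood), mimicking
    unary and edge potentials in a Markov Random Field.
    """
    rows, cols = len(obs), len(obs[0])
    labels = [row[:] for row in obs]
    for _ in range(iters):
        for i in range(rows):
            for j in range(cols):
                best_lab, best_score = labels[i][j], float("-inf")
                for lab in (-1, 1):
                    score = lab * obs[i][j]          # data term
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            # smoothness term over the 4-neighbourhood
                            score += coupling * lab * labels[ni][nj]
                    if score > best_score:
                        best_lab, best_score = lab, score
                labels[i][j] = best_lab
    return labels
```

A single isolated -1 in a field of +1 labels is flipped by the smoothness term, which is the spatial-coherence effect the edge potentials provide.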
Abstract: Graphs are ubiquitous in representing linked data such as call data in telecommunication networks, friendship connections in social networks, links between similar proteins in protein networks, etc. Dense components in these networks give us important information such as communities, protein complexes, etc. Often the networks are dynamic: the underlying topology changes over time as new edges/vertices are added and existing edges/vertices are removed. For example, in a friendship network, new edges are added when strangers become friends, and existing edges are deleted when old friends separate. Dense subgraphs are tightly connected, with each node having a high degree within the subgraph, and these subgraphs change over time due to the addition/deletion of edges. In this research, we consider the maintenance of fundamental dense subgraphs, such as maximal cliques and maximal bicliques, in a dynamic graph. Note that the number of maximal cliques and maximal bicliques in a graph can be exponential in the number of vertices. Moreover, the changes in these dense subgraphs can be very small or very large when the graph changes slightly due to the addition/deletion of edges. Thus, an efficient technique for enumerating the changes in these dense subgraphs upon a graph update should spend time proportional to the size of the change in the dense subgraphs. We design efficient algorithms for the maintenance of maximal cliques/maximal bicliques. Our algorithms are efficient in the sense that the time complexity of maintaining the changes in the dense subgraphs is proportional to the size of the changes.
We experimentally show that our algorithm for incremental maintenance of maximal cliques is 100-1000 times faster than prior works on the same problem, and that the algorithm for incremental maintenance of maximal bicliques is more than 100 times faster than the baseline algorithm for the same problem. For maximal biclique maintenance, we compare against a baseline algorithm because there is no prior work on the maintenance of maximal bicliques. Next, we consider a parallel algorithm for the maximal clique enumeration (MCE) problem, which is to enumerate all maximal cliques of a static graph. We design a theoretically work-efficient shared-memory parallel algorithm for MCE, motivated by the following facts: (1) the number of maximal cliques in a graph can be exponential, so enumerating them with any sequential algorithm needs an exponential amount of time in the worst case, and a way to speed up the enumeration is to add parallelism to it; (2) the state-of-the-art parallel algorithms for MCE are not theoretically work-efficient. We use the work-depth model of computation for analyzing our algorithm. We empirically evaluate the algorithms on large real-world graphs and show that our parallel algorithm achieves 15x-31x parallel speedup over the most efficient practical sequential algorithm for MCE on a single 32-core Intel machine with 256 GB RAM.

Biography: Apurba Das is a PhD candidate in the Electrical and Computer Engineering Department at Iowa State University, working with Dr. Srikanta Tirthapura since Spring 2014 in the area of incremental and parallel algorithm design for graph analytics. Prior to that, he worked as a software development engineer at Ixia Technology for two years, after completing an MTech in Computer Science in 2011 from the Indian Statistical Institute, Kolkata, India, and a BTech in Computer Science in 2008 from IIEST, Shibpur (formerly BESU), India.
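For readers unfamiliar with maximal clique enumeration, a compact sequential baseline is the classical Bron-Kerbosch algorithm with pivoting, sketched below. This is background only, not the incremental or parallel algorithms described in the talk.

```python
def bron_kerbosch(graph):
    """Enumerate all maximal cliques of an undirected graph.

    graph: dict mapping each vertex to the set of its neighbours.
    Returns a list of maximal cliques as frozensets.
    """
    def expand(R, P, X):
        # R: current clique; P: candidates to extend it; X: already-tried
        # vertices. A clique is maximal exactly when P and X are both empty.
        if not P and not X:
            yield frozenset(R)
            return
        # Pivoting: skip neighbours of a high-degree pivot to prune branches.
        pivot = max(P | X, key=lambda u: len(graph[u] & P))
        for v in list(P - graph[pivot]):
            yield from expand(R | {v}, P & graph[v], X & graph[v])
            P.remove(v)
            X.add(v)
    return list(expand(set(), set(graph), set()))
```

On a triangle with a pendant vertex, the enumeration yields exactly the two maximal cliques, the triangle and the pendant edge.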
Abstract: User engagement in online social networking depends critically on the level of social activity in the corresponding platform, that is, the number of online actions, such as posts, shares or replies, taken by its users. Can we design data-driven algorithms to increase social activity? At a user level, such algorithms may increase activity by helping users decide when to take an action so as to be more likely to be noticed by their peers. At a network level, they may increase activity by incentivizing a few influential users to take more actions, which in turn will trigger additional actions by other users. In this work, we model social activity using the framework of marked temporal point processes, derive an alternate representation of these processes using stochastic differential equations (SDEs) with jumps and, exploiting this alternate representation, develop two efficient online algorithms with provable guarantees to steer social activity both at a user and at a network level. In doing so, we establish a previously unexplored connection between optimal control of jump SDEs and doubly stochastic marked temporal point processes, which is of independent interest. Finally, we experiment with both synthetic and real data gathered from Twitter and show that our algorithms consistently steer social activity more effectively than the state of the art.

Biography: Abir De is currently a post-doctoral researcher in the Network Learning group at the MPI for Software Systems, Kaiserslautern, Germany. Prior to this, he pursued his PhD in the Department of Computer Science and Engineering at IIT Kharagpur, and is expected to receive the degree in July 2018. During his PhD, he was a part of the Complex Network Research Group (CNeRG) at IIT Kharagpur, where he worked on modeling and learning influence in social networks. His PhD work was supported by a Google India PhD Fellowship (2013).
Prior to his PhD, he received his BTech in Electrical Engineering and his MTech in Control Systems Engineering, both from IIT Kharagpur, in 2011.
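A minimal sketch of the kind of process underlying the talk above: a self-exciting (Hawkes) temporal point process, in which every action raises the intensity of future actions, simulated here with Ogata's thinning algorithm. The parameter values are illustrative, and the talk's actual contribution (optimal control via jump SDEs) is not reproduced.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, rng=random):
    """Simulate a Hawkes process on (0, horizon] via Ogata's thinning.

    mu:    baseline intensity
    alpha: jump in intensity contributed by each past event
    beta:  exponential decay rate of that contribution (alpha < beta
           keeps the process stable)
    Returns the sorted list of event times.
    """
    events, t = [], 0.0
    while True:
        # Between events the intensity only decays, so the intensity at
        # the current time is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t > horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)   # accepted: the process "jumps"
```

Each accepted event corresponds to a jump of the SDE representation mentioned in the abstract; steering activity amounts to controlling the intensity that drives these jumps.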
Biography: Mr. Vigyan Singhal serves as the Chairman at Oski Technology Inc. and has been its Chief Oski since April 2018. Mr. Singhal previously served as Chief Executive Officer and President of Oski Technology Inc., responsible for overall leadership. Mr. Singhal also serves as an ASIC design and verification consultant at many startups and established companies. He served as Chief Executive Officer and President of Elastix Corporation, and as Vice President of Engineering at Jasper Design Automation, Inc. Previously, Mr. Singhal was employed at Cadence Design Systems, where he invented the technology behind a Cadence equivalence checker. He served as Chief Technology Officer and founding Chief Executive Officer of Jasper Design Automation. He founded Oski Technology in 2005. He was an Entrepreneur-in-Residence at Foundation Capital, and started his career as a Research Scientist at Cadence Berkeley Labs. He serves on the Technical Advisory Boards of Cypress Semiconductor and PwrLite, and served as a Director of Jasper Design Automation, Inc. He teaches a class on Design Verification at UC Extension. Mr. Singhal has published more than 50 papers in research conferences and journals and holds 12 patents in IC design and verification. He received a PhD in EE & Computer Science from the University of California at Berkeley, where he was a Regents Scholar, and a BTech in Computer Science from the Indian Institute of Technology, Kanpur, India.
Abstract: Amazon is investing heavily in Machine Learning to create delightful customer experiences. In this talk, we will look into various projects where ML algorithms are leveraged, such as product recommendation, demand forecasting, product search and matching, information extraction from reviews, visual search, and so on. This talk aims to cover the breadth of ML applications at Amazon.

Biography: Parth Gupta (http://users.dsic.upv.es/~pgupta/) is an ML Scientist at Amazon. He holds a PhD in machine learning and information retrieval. Parth has published extensively in international conferences and journals and serves on various research committees. Through his academic background and experiences at places like Microsoft Research, Microsoft Bing and Amazon, he solves interesting ML problems and contributes to applied ML research. He is also an open-source enthusiast and often contributes through the Google Summer of Code program.
Biography: Ahmet Kondoz (M'91, SM'11) received his PhD degree in 1987 from the University of Surrey, UK. From 1986 to 1988, he was employed as a research fellow in the communication systems research group. He became a lecturer in 1988, a reader in 1995, and in 1996 he was promoted to professor in multimedia communication systems. He was the founding head of I-LAB, a multi-disciplinary multimedia communication systems research lab at the University of Surrey. In January 2014, he was appointed the founding Director of the Institute for Digital Technologies at Loughborough University London, a postgraduate teaching, research and enterprise institute. He is also serving as the Associate Dean for Research at Loughborough University London. His research interests include digital signal processing and coding, fixed and mobile multimedia communication systems, 3D immersive media applications for future Internet systems, smart systems such as autonomous vehicles and assistive technologies, big data analytics and visualisation, and related cyber security systems. He has over 400 publications, including six books, several book chapters, and seven patents, and has graduated more than 75 PhD students. He has been a consultant for major wireless media industries and has acted as an advisor for various international governmental departments, research councils and patent attorneys. Dr. Kondoz has been involved with several European Commission FP6 & FP7 research and development projects, such as NEWCOM, e-SENSE, SUIT, VISNET and MUSCADE, involving leading universities, research institutes and industrial organisations across Europe. He coordinated the FP6 VISNET II NoE, FP7 DIOMEDES STREP and ROMEO IP projects, involving many leading organisations across Europe, which dealt with the hybrid delivery of high-quality 3D immersive media to remote collaborating users, including those with mobile terminals.
He co-chaired the European networked media advisory task force and contributed to the Future Media and 3D Internet activities to support the European Commission in the FP7 programmes. http://www.lborolondon.ac.uk/about/staff/ahmet-kondoz/

Dr. Rahulamathavan received a B.Sc. degree (first-class honors) in electronic and telecommunication engineering from the University of Moratuwa, Sri Lanka, in 2008, and a Ph.D. degree in signal processing from Loughborough University, UK, in 2011. From April 2008 to September 2008, he was an Engineer at Sri Lanka Telecom, Sri Lanka, and from November 2011 to March 2012, he was a Research Assistant with the Advanced Signal Processing Group, School of Electronic, Electrical and Systems Engineering, Loughborough University, UK. He has worked as a Research Fellow with the Information Security Group, School of Engineering and Mathematical Sciences, City University London, UK. Dr. Rahulamathavan received a scholarship from Loughborough University to pursue his Ph.D. degree. He is currently a faculty member at Loughborough University, UK, where he is coordinating a UK-India project (worth £200,000) between Loughborough University London and IIT Kharagpur. His research interests include signal processing, machine learning, and information security and privacy. http://www.drrahul.uk/
Abstract: Gyori (in 1976) and Lovasz (in 1977) independently proved the following beautiful theorem, popularly called the Gyori-Lovasz Theorem: "Let $G$ be a $k$-vertex-connected graph on $n$ vertices. Let $v_1, v_2, \ldots, v_k$ be a set of $k$ designated vertices, and let $n_1, n_2, \ldots, n_k$ be natural numbers such that $n_1 + \cdots + n_k = n$. Then the graph $G$ can be partitioned into $k$ connected induced subgraphs $S_1, \ldots, S_k$ such that each subgraph $S_i$ has exactly $n_i$ vertices and contains the vertex $v_i$." In this talk, Prof. Sunil Chandran will present recent work of his that generalizes the Gyori-Lovasz Theorem to vertex-weighted graphs. This generalized theorem has applications to problems in combinatorial optimization; he uses it to improve the previously best-known bound for the spanning-tree-congestion problem. The theorem may even find applications in problems in social network theory.

Biography: Prof. L. Sunil Chandran is a professor in the Department of Computer Science & Automation, Indian Institute of Science, Bangalore. His areas of research are graph theory and combinatorics. He has around 75 publications in reputed international journals. Prof. Chandran is a fellow of the Indian National Academy of Engineering, a member of the National Academy of Sciences, and an associate of the Indian Academy of Sciences, and has won several awards and fellowships, including the Humboldt Fellowship (for experienced researchers), the NASI-SCOPUS Young Scientist Award, the MSR India Outstanding Young Faculty Award (2009-10) at IISc, and the NASI Young Scientist Platinum Jubilee Award.
Biography: I am an Associate Professor in the Computer Science Department at the University of Arizona. I won the NSF CAREER award (2004) to work on "Pattern Matching, Realistic Input Models and Sensor Placement: Useful Algorithms in Computational Geometry." I am on the editorial boards of IJCGA and JDA. I was a program committee member of SoCG 05, SoCG 13, Broadnets 06, 07, 08, 09, ICC, ACM GIS 2007, 08, 09, 11, 12, 13, 14, 15, ALGOSENSORS 08 and 11, FOCS 09, INFOCOM 09, 10 and 11, MILCOM 14, 15, DRCN 15, 16, and VISIGRAPP 15. I was co-chair of the sensor network algorithms track of ALGOSENSORS 14 and a co-organizer of the workshop on Geometric Optimization in Wireless Communication and Sensing, held in conjunction with SoCG 14.
Abstract: The proliferation of connected devices in the computing landscape has opened immense possibilities to harness the potential of inherently distributed multi-modal sensor platforms (aka Internet of Things, or IoT, platforms) for societal benefit. Large-scale situation awareness applications are envisioned to utilize the sensing infrastructure to convert sensed information into actionable knowledge. Applications like connected vehicles and smart cities generate high volumes of data and are sensitive to response time. Traditional cloud-based centralized processing of these data streams is detrimental due to the high latency and unpredictable network throughput of wide-area networks. These reasons suggest that processing should take place at the edge of the network, in a geo-distributed manner, near the end-devices. Fog computing, a term first coined by Cisco, envisions extending the utility computing model of the cloud to the edge of the network, for instance in network routers and city-scale micro-datacenters. Our research takes a holistic view of the resource management issues involved in realizing this paradigm of computing. In particular, the issues to tackle are the allocation of computational resources among applications, the management of IoT infrastructure at scale across multiple applications, and the management of application state in a fault-tolerant and latency-aware manner. In this talk, I will discuss the projects on this theme being worked on at the Embedded Pervasive Lab, Georgia Tech, and encourage future graduate study applicants to apply there.

Biography: Harshit Gupta did his BTech in CSE at IIT Kharagpur. He is currently a PhD student at Georgia Tech. He would like to interact with students interested in pursuing a PhD at Georgia Tech.
Abstract: Password checking systems traditionally allow login only if the correct password is submitted. Login rejection due to small mistakes is frustrating and delays login for genuine users. In this talk, I will discuss two methods for tolerating small typographical mistakes in passwords that improve the usability of passwords while negligibly hampering their security. The first method uses population-wide password-typing statistics to identify typos that are easily correctable and prevalent among users. If the submitted password is incorrect, the password checker attempts to correct those typos on the fly. I will show that nearly 9% of all typing mistakes can be corrected by this method by fixing only three kinds of typos: accidentally leaving caps lock on, incorrect capitalization of the first letter of the password, and adding an extraneous character at the end of the password. Via a 24-hour measurement study in the live Dropbox authentication system, we found a large benefit from correcting such simple typos: 20% of the users who made at least one mistake in submitting passwords could log in one minute earlier, and 3% more users could log in to Dropbox at all. In the second method, I will introduce the notion of personalized typo-tolerance: tolerating the typos that a given user makes frequently. We design a new kind of password checker, called TypTop, that adaptively learns which typos a user makes and allows the user to log in with a frequent subset of them. Underlying TypTop is a new stateful password-based encryption scheme that can be used to store recent failed login attempts. A prototype of TypTop has been implemented for Linux and Mac OS login and can be found at https://typtop.info. Finally, I will formally analyze the security of these typo-tolerant password checkers in the face of an attacker compromising the password checking server or attempting to impersonate a user by guessing their password.
I show that in such scenarios the security remains the same as with traditional (non-typo-tolerant) password checkers. This is joint work with my co-researchers Devdatta Akhawe, Anish Athalye, Anusha Chowdhury, Ari Juels, Yuval Pnueli, Thomas Ristenpart, and Joanne Woodage. Part of this work received a Distinguished Student Paper Award at IEEE S&P 2016.

Biography: Rahul is a PhD candidate at Cornell University. Prior to joining Cornell, he completed his Masters at the University of Wisconsin-Madison in 2015 and his Bachelors at IIT Kharagpur in 2012. He interned at Microsoft Research and Dropbox in Summer 2015 and Summer 2016, respectively. His interests are in building secure, usable systems and formally analyzing their security. The primary focus of his doctoral research is improving passwords. He is also working on securing storage of large biometric datasets and preventing (ab)use of smartphone apps for domestic violence.