Vasisht Raghavendra
Tagline: Imbibing Knowledge to Digital Assistants at Oracle
About
Experienced Data Scientist with expertise in NLP, Deep Learning, Statistical Data Analysis, and general Machine Learning. Self-taught in the field of financial markets.
Education
Master of Science (M.S.)
from: 2014, until: 2016
Field of study: Computer Science (Data Science)
School: University of Southern California
Bachelor of Engineering (B.E.)
from: 2009, until: 2013
Field of study: Information Science
School: Visvesvaraya Technological University
Work Experiences
Applied Scientist 3
from: 2023, until: present
Organization: Oracle
Location: Greater Vancouver, British Columbia, Canada
Senior Applied Data Scientist
from: 2021, until: 2023
Organization: Oracle
Senior Data Scientist
from: 2019, until: 2021
Organization: Yewno, Inc.
Description: Extracting insights from alternative data.
Data Scientist
from: 2018, until: 2019
Organization: Yewno, Inc.
Location: Redwood City, California
Description: Developed algorithms for named-entity disambiguation from text using semantic search and other pertinent measures, and deployed these solutions on a corpus of 3 billion text snippets.
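A minimal sketch of the kind of semantic-similarity ranking that entity disambiguation of this sort relies on; the candidates and context below are toy data, and the production system obviously worked at a much larger scale than this baseline:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def disambiguate(mention_context, candidates):
    """Pick the candidate entity whose description best matches the mention's context.

    candidates: dict mapping entity id -> short description (toy data below).
    """
    names, descriptions = zip(*candidates.items())
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([mention_context, *descriptions])
    # Similarity of the mention context (row 0) against every candidate description.
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
    return names[scores.argmax()]

# Toy usage: disambiguate the mention "Apple" against two candidate entities.
best = disambiguate(
    "Apple unveiled the new iPhone at its technology conference.",
    {"Apple_Inc": "technology company behind the iPhone and the Mac",
     "Apple_fruit": "edible fruit produced by the apple tree"},
)
print(best)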
Data Scientist
from: 2016, until: 2017
Organization: Imbellus
Location: Los Angeles
Description:
- Performed ad-hoc analysis on the raw data to find interesting patterns and discover features
- Created interactive visualizations using Python libraries and D3.js to communicate the findings
- Engineered features from the raw data to help infer cognitive capabilities in a simulation-based assessment
- Set up the data pipeline to collect and store the telemetry data from the simulation in a PostgreSQL database
- Created API endpoints using Flask for communication between the simulation client and the database (sketched below)
- Analyzed the client's data to learn about their current model for evaluating candidates
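A rough sketch of what the Flask-to-PostgreSQL telemetry path could look like; the route, table, column names, and connection string are assumptions for illustration, not the actual schema:

import json

from flask import Flask, request, jsonify
import psycopg2

app = Flask(__name__)

@app.route("/telemetry", methods=["POST"])
def ingest_telemetry():
    # Receive one telemetry event from the simulation client and persist it.
    event = request.get_json()
    conn = psycopg2.connect("dbname=assessments user=app")  # assumed connection string
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO telemetry_events (session_id, event_type, payload) "
            "VALUES (%s, %s, %s)",
            (event["session_id"], event["event_type"], json.dumps(event["payload"])),
        )
    conn.close()
    return jsonify({"status": "ok"}), 201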
Data Analytics Intern
from: 2016, until: 2016
Organization: Verimatrix
Location: San Diego
Description: As part of the R&D department, analyzed the Conditional Access System (VCAS) server logs to identify fraudulent behavior. The main goal of this project was to analyze the logs, establish norms on the data, and find anomalies, where an anomaly is an instance of fraud or hacking. The tasks included parsing the logs to extract important attributes using Python, building models to identify suspicious behavior, and scaling the solution in a distributed environment (Spark and Hadoop). Helped the Global Services team focus on potential markets by analyzing their data to report sales and revenue. Analyzed data collected from the Periscope application to build a proof of concept showing the increase in views during important events.
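The log-parsing and anomaly-flagging step can be illustrated with a small PySpark sketch; the log path, the device-id pattern, and the three-sigma volume rule are assumptions, and the real models were richer than a single threshold:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("log-anomalies").getOrCreate()

# Parse raw server logs into (timestamp, device) pairs; the regexes assume a
# "timestamp ... device=<id>" line format purely for illustration.
logs = spark.read.text("hdfs:///vcas/logs/*.log")
parsed = logs.select(
    F.regexp_extract("value", r"^(\S+ \S+)", 1).alias("timestamp"),
    F.regexp_extract("value", r"device=(\S+)", 1).alias("device"),
)

# Request volume per device per hour.
hourly = (
    parsed
    .withColumn("hour", F.substring("timestamp", 1, 13))
    .groupBy("device", "hour")
    .count()
)

# Flag devices whose hourly volume sits far above the norm.
stats = hourly.agg(F.mean("count").alias("mu"), F.stddev("count").alias("sigma")).first()
anomalies = hourly.filter(F.col("count") > stats["mu"] + 3 * stats["sigma"])
anomalies.show()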
Data Science Intern
from: 2015, until: 2015
Organization: Pinsight Media+ (powered by Sprint)
Location: Kansas City, MO
Description: Developed a model for identity resolution across different datasets to identify unique persons and create a unique ID for each. The data was cleaned and prepared using Hive and Python, and a framework was built in Java on Spark. GraphLab was used to generate sample models, while R and Pandas were used to analyze the data and generate reports.
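The core idea of identity resolution can be sketched in a few lines; the original framework was written in Java on Spark, so pandas and networkx stand in here, and the records are made up. Records that share a normalized key are linked, and each connected component becomes one person:

import pandas as pd
import networkx as nx

records = pd.DataFrame([
    {"record_id": 1, "email": "a@x.com", "phone": "555-0100"},
    {"record_id": 2, "email": "A@X.COM", "phone": None},
    {"record_id": 3, "email": "b@y.com", "phone": "555-0100"},
    {"record_id": 4, "email": "c@z.com", "phone": "555-0199"},
])
records["email_norm"] = records["email"].str.lower()

# Build a graph where an edge means "these records share an identifying key".
g = nx.Graph()
g.add_nodes_from(records["record_id"])
for key in ("email_norm", "phone"):
    for _, group in records.dropna(subset=[key]).groupby(key):
        ids = list(group["record_id"])
        g.add_edges_from(zip(ids, ids[1:]))

# Each connected component gets one unique person ID.
person_id = {rid: i for i, comp in enumerate(nx.connected_components(g)) for rid in comp}
print(person_id)  # records 1, 2, and 3 resolve to the same person; 4 stands alone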
Intern
from: 2013, until: 2014
Organization: Ciber
Location: Bengaluru Area, India
Description: Worked on SAP Basis/NetWeaver technology and underwent training on SAP solutions and system landscapes, installation basics, database concepts, upgrades, Java administration, Solution Manager, high-availability concepts, and system monitoring. Later joined a team supporting multiple clients; responsibilities included monitoring the health of the SAP systems by running the daily monitoring T-codes and, in the event of any failure, fixing the system by making suitable changes and generating a report.
Software Intern
from: 2013, until: 2013
Organization: Bizosys Technologies
Location: Bengaluru Area, India
Description: Worked in a start-up environment as part of a live project, ShopGirl, which involved crawling European fashion portals with the Nutch crawler to collect data about their products. The ultimate goal of the project was to create a master portal of the crawled sites.
Projects
Statoil C-core ship or iceberg prediction
date: 2017
Description: Kaggle competition; finished in the top 13% of 3,343 teams. Built an algorithm that automatically identifies whether a remotely sensed target is a ship or an iceberg.
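A minimal Keras sketch in the spirit of that entry; the actual solution's architecture and preprocessing differed, but the Statoil/C-CORE inputs are 75x75 two-band SAR patches and the target is the probability that the target is an iceberg:

from tensorflow.keras import layers, models

# Small binary-classification CNN over two-channel radar patches.
model = models.Sequential([
    layers.Input(shape=(75, 75, 2)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(iceberg)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])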
Predicting Engagement on Facebook posts
date: 2017
Description: Predicted engagement on Facebook posts given the post text and metadata such as post creation time (retrieved via the Facebook Insights API), using Doc2Vec.
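A condensed sketch of that approach, assuming gensim's Doc2Vec for the text and a gradient-boosted regressor for the prediction; the posts, hours, and engagement numbers below are made up:

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.ensemble import GradientBoostingRegressor

posts = ["big product launch today", "weekend sale starts now", "we are hiring"]
hours = [9, 12, 17]           # metadata: post creation hour
engagement = [340, 120, 45]   # e.g. likes + comments + shares

# Learn document embeddings for the post text.
docs = [TaggedDocument(p.split(), [i]) for i, p in enumerate(posts)]
d2v = Doc2Vec(docs, vector_size=20, min_count=1, epochs=40)

# Concatenate the text embedding with the metadata feature and fit a regressor.
X = np.hstack([
    np.vstack([d2v.infer_vector(p.split()) for p in posts]),
    np.array(hours).reshape(-1, 1),
])
model = GradientBoostingRegressor().fit(X, engagement)
print(model.predict(X[:1]))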
Skills
- Financial Management
- C
- Java
- MySQL
- C++
- SQL
- HTML
- Python
- Apache Spark
- Hadoop
- MapReduce
- JavaScript
- R
- Hive
- CSS
- AJAX