-> I have 12+ years of professional IT experience architecting and developing applications in Data Engineering, Java and Cloud.
-> Expertise in developing high-performance real-time and batch applications using the Spark framework with Java and Scala.
-> In-depth knowledge of the trade lifecycle and regulatory requirements for global regulators such as CFTC, SEC, CAN, EMIR, FCA, MAS, JFSA, HKMA, ASIC and Korea.
-> Expertise in data management and in building ETL and data lake applications.
-> A solid understanding of data structures and algorithms, as well as systems design.
-> I am a Certified Spark, Kubernetes and AWS Developer.
-> Ability to adapt to evolving technologies, with a strong sense of responsibility and accomplishment.
-> I am a quick learner with strong analytical and problem-solving skills.
Proficient in Data Modeling, Database Design and Data Migration.
Developing batch and real-time applications for Post Trade Technology to submit regulatory reports to global regulators such as CFTC, SEC, CAN, EMIR, HKMA, MAS, ASIC, JFSA, FCA and Korea; currently working on Refit changes for these regulators.
Strong understanding of the trade lifecycle and regulatory reporting requirements for derivatives products: Rates, FX, EQ, CO and CR.
Contributing to application modernization onto the AWS public cloud.
Technologies used: Spark, Apache Ignite, Java, Spring Boot, Kafka, Oracle, AWS and Kubernetes.
As a Senior Data Engineer on the RIPPLE team, I developed Spark applications to migrate existing MSSQL applications to Azure and worked closely with the data scientists.
Developed applications to clean Adobe clickstream data using Spark, Azure, Apache NiFi and Kafka; this data is used by data scientists to build machine learning models for RIPPLE and MediaCorp.
As part of the LTA-DE (Long Term Architecture for Decision Engine) team, I worked on building a scalable platform with big-data and open-source technologies that supports data protection, high availability, disaster recovery and lower TCO. It was designed to be reusable and extensible, with the goal of reuse in future Visa products with reasonable effort.
Developed big-data pipelines for Ford to migrate data from legacy systems such as Oracle, SQL Server, Teradata and Mainframe into Hadoop as part of their future strategic plan.
There were around 111 sources from different source systems such as Oracle, SQL Server, MySQL, DB2, Teradata and Mainframe files, which were moved from the source systems into the Hadoop Distributed File System.
Developed a data ingestion framework in Java using the HBase, Sqoop, RDBMS and File System APIs. Jobs were scheduled using the Falcon and Oozie schedulers, and PII/SPII information was secured using Apache Ranger policies.
As part of the TESCO Warehouse Management team, I developed Hadoop applications using Flume, Hive, AWS, Unix shell scripting, Sqoop, Oozie, Core Java, Hue, the Cloudera Hadoop distribution and MapReduce.
I joined DXC as a fresher and developed billing applications for the Guardian Life Insurance client in Mainframe, Java and Oracle.
Spark, Java, Kafka, AWS, Python, SQL, Oracle, Spring Boot, Kubernetes, Hive, Cloud, NoSQL, UNIX, ETL