Prathmesh Pandit

Summary

Motivated IT professional with experience across Banking, Product, Telecom and Government domains, managing Cloud, DevOps and Big Data projects for internal and external clients.


Overview

15 years of professional experience
1 Certification

Work History

Sr. Cloud and DevOps, Big Data Specialist (Manager)

Thales Group
01.2022 - Current
  • Help a team of 5 set up infrastructure on AWS and GCP with Terraform
  • Established GitLab pipelines for infrastructure setup across different branches and repositories
  • Implemented data pipelines to improve ingestion performance into HDFS, processing data in Parquet format with Spark
  • Implemented Hadoop clusters with Cloudera services such as HBase and Hive
  • Perform Spark job tuning in Hadoop YARN
  • Proficient in Python scripting
  • Created Splunk indexes and dashboards on Splunk Cloud for different applications
  • Installed Fluent Bit to push server and application logs to Splunk Cloud
  • Integrated AWS and GCP with Datadog monitoring using Terraform
  • Created Datadog monitoring dashboards for AWS and GCP with Terraform automation
  • Set up and upgraded AWS EKS, including monitoring and scaling EKS clusters
  • Created MongoDB databases and ran queries to fetch data for application teams
  • Perform Splunk platform maintenance, capacity management and software updates

Sr. Technical Lead – DevOps and Big Data

HCL Technologies
10.2020 - 12.2021
  • Worked with AWS resources such as IAM, EC2, EBS, S3, ELB, VPC, Route 53, Auto Scaling, CloudWatch, CloudFront, CloudTrail, Redshift, SQS and SNS; experienced in cloud automation
  • Worked with cloud provisioning tools such as Terraform and CloudFormation
  • Created and managed CI/CD pipelines with Atlassian Bamboo for continuous integration and end-to-end automation of builds and deployments
  • Set up Kubernetes clusters in Rancher and deployed workloads
  • Worked with IT infrastructure monitoring tools such as CloudWatch, Zabbix and Splunk
  • Built, deployed and managed large-scale Hadoop-based data infrastructure
  • Installed, configured and administered Hadoop clusters of major distributions
  • Good knowledge of Kerberos security; experienced in cluster performance tuning and benchmarking
  • Created batch and real-time ETL pipelines
  • Maintained the ELK stack and used Kibana for log management

Sr. DevOps and Data Engineer

DataSpark
10.2018 - 09.2020
  • Good knowledge of essential AWS services (EC2, S3, IAM, EBS, RDS, EC2 security groups)
  • Good knowledge of AWS networking (internet gateways, VPC, subnets, Route 53)
  • Provisioned infrastructure on AWS with Terraform
  • Implemented CI/CD pipelines with Bamboo
  • Created Ansible playbooks and modules for automation
  • Administered Ansible infrastructure, performed maintenance and configuration, and provided SME-level support
  • Implemented production-ready, load-balanced, highly available, fault-tolerant Kubernetes infrastructure
  • Deployed cloud services, including Jenkins and Docker, using Terraform
  • Designed and implemented Cloudera Hadoop clusters and configured services such as Kafka, Spark, Redis and Hive
  • Maintained Hadoop clusters, adding and removing nodes via Cloudera Manager; configured NameNode high availability and tracked running Hadoop jobs
  • Designed and developed complex data pipelines and maintained data quality to support the business
  • Performed Spark job tuning and Cloudera resource-level tuning
  • Imported datasets into R and visualized data
  • Produced Cloudera reporting metrics and utilization reports for all nodes

Senior System and DevOps Engineer

HCL INSYS PTE LTD
07.2013 - 09.2018
  • Managed the daily Jenkins CI/CD pipeline
  • Worked on Docker containers and Kubernetes orchestration
  • Worked with configuration management tools Puppet and Ansible for automation
  • Performed continuous monitoring with Zabbix and Splunk
  • Managed system performance issues and helped other teams troubleshoot
  • Managed source code with Git and GitHub
  • Performed deployments using Oracle WebLogic and Oracle SOA
  • Managed cloud infrastructure in AWS and performed administration activities
  • Understanding of regulatory audit requirements
  • Worked on disaster recovery (DR) activities and user testing during DR
  • Managed the BAU team handling daily UNIX and middleware activities

System Engineer

Optimum Solutions
11.2010 - 06.2013
  • Managed on-premises Solaris and Linux servers; performed installations and upgrades
  • Managed and administered zones on Solaris 10 servers
  • Installed IBM WebSphere and performed deployments
  • Installed and administered IBM HTTP Server
  • Managed the integration team and assigned project-based tasks to each team member
  • Followed up closely with customers on project completion deadlines

Unix and Storage Administrator

Amdocs DVCI
07.2008 - 10.2010
  • Installed Solaris 8, 9 and 10 on M4000 and Netra servers; worked on newer features such as Solaris zones and ZFS file system creation
  • Installed AIX 5L 5.3 and AIX 6.1 on P5 and P6 servers
  • Created NIM master servers and installed operating systems via NIM
  • Worked on incident and problem management per SLAs defined by Amdocs
  • Worked with Veritas Volume Manager for disk group and file system creation
  • As part of the Amdocs DVCI (Development Centre of India) UNIX team, worked with HP Data Protector to handle all DVCI backups
  • Allocated LUNs from EMC VMAX and DMX arrays to data centre servers per customer requirements
  • Set up WWN zoning and Fibre Channel zoning
  • Handled data centre maintenance activities and other tasks

Education

Bachelor of Science - Mathematics

University of Mumbai
Bhavans College
05.2003

Skills

  • AWS
  • GCP
  • Terraform
  • Kubernetes
  • GitLab CI/CD
  • MongoDB
  • Splunk
  • Datadog
  • Hadoop
  • Spark
  • Python scripting

Technology Summary

  • AWS | GCP | Azure
  • Cloudera Hadoop | Kafka | Spark | Hive
  • Bamboo | Jenkins | Ansible | Docker | Kubernetes | GitLab | Rancher | Terraform | Nexus
  • MySQL | MongoDB
  • BASH | Python | R
  • Grafana | Zabbix | Splunk
  • JIRA | Confluence
  • Linux | Solaris 11 | Windows 10 | Windows Server 2016
  • Checkpoint Firewall | Fortinet

Certification

AWS Certified Solutions Architect
Cloudera Certified Hadoop Administrator
Google Cloud Architect

Affiliations

  • Toastmasters
  • Rotary International

Languages

English
Bilingual or Proficient (C2)
Marathi
Bilingual or Proficient (C2)
Hindi
Upper intermediate (B2)

Timeline

Sr. Cloud and DevOps, Big Data Specialist (Manager)

Thales Group
01.2022 - Current

Sr. Technical Lead – DevOps and Big Data

HCL Technologies
10.2020 - 12.2021

Sr. DevOps and Data Engineer

DataSpark
10.2018 - 09.2020

Senior System and DevOps Engineer

HCL INSYS PTE LTD
07.2013 - 09.2018

System Engineer

Optimum Solutions
11.2010 - 06.2013

Unix and Storage Administrator

Amdocs DVCI
07.2008 - 10.2010

Bachelor of Science - Mathematics

University of Mumbai