Category | Resume
---|---
PMO | CORE COMPETENCIES
• Maintain processes to ensure project management documentation, reports and plans are relevant, accurate and complete
• Report automation, dashboard preparation and sharing feedback based on Project Manager performance
• Forecasting data regarding future risks and project changes, and updating the delivery team on a timely basis
• Good understanding of the project management lifecycle
• Proven excellence in risk management and control
• Good understanding of the Software Development Lifecycle (SDLC)
• Ability to synthesize qualitative and quantitative data quickly and draw meaningful insights
• Knowledge of Programme/Project Management methodologies with full project reporting and governance
• Ability to work with cross-functional stakeholders to establish and maintain a reliable and productive working relationship
• Strong time management and organizational skills
• Multitasking skills and ability to meet deadlines
COMPUTER SKILLS AND CERTIFICATION
• Advanced knowledge of MS Office 2013 and macros
SKILLS
• Strategic thinking and decision-making ability
• Sound analytical skills
• Multitasking skills in a fast-paced environment
• Leadership and interpersonal skills
• Strong information management ability, particularly MS Excel extraction, formulae, pivots and graphs
Education Details
January 2005 Bachelor of Business Administration Business Administration Pune, Maharashtra Modern College
HSC Pune, Maharashtra S.S.P.M.S College
SSC Pune, Maharashtra Saints High School
PMO
6 years of experience in IT project management. Expertise in PMO, team handling and quality analysis. Proficient in data analysis tools and techniques.
Skill Details
DOCUMENTATION - Experience - 47 months
GOVERNANCE - Experience - 19 months
EXCEL - Experience - 6 months
FORECASTING - Experience - 6 months
MS EXCEL - Experience - 6 months
Company Details
company - Capita India Pvt ltd
description - Pune
Key Result Areas
Responsible for the successful transition of knowledge, systems and operating capabilities for Prudential, Multiclient, Phoenix & Royal London.
• Travelled onsite (Glasgow) and worked with the UK team to understand the transition PMO work process and execute it successfully offshore.
• Successfully transitioned work order management, governance and reporting from the UK.
• Led a team of 6 members and followed up on the development of new ways of working and documentation processes.
• Manage internal and external stakeholder engagement, collaboration of teams, and the global PMO network.
• Help achieve robust operations with all the resources and infrastructure needed to execute steady-state operations.
company - Saviant Technologies
description - for Multiple Projects
• Established a PMO from scratch and provided seasoned leadership to the technical operations staff.
• Defined and implemented work priority management and resource management processes.
• Established a supportive environment that allowed employees to grow and provide imaginative solutions to complex client needs.
• Tracked and monitored financial performance of the program; reported actual-to-budget comparisons for labor hours and dollars, operating costs and capital costs; secured funding approvals for changes in scope.
• Monitored program risks through an ongoing process of identifying, assessing, tracking, developing and executing risk mitigation strategies.
• Reviewed project documentation, documented lessons learned and provided recommendations to mitigate them in future projects.
• Risk identification, mitigation strategy, issue escalation, client communication, project timelines and resource management.
company - Infosys
description - Pune
Key Result Areas
Responsible for:
• Resource management, budgeting and billing.
• Preparing and sharing different reports with Delivery Managers, Project Managers and the Quality team.
• Automation of reports for the entire unit.
• Interpreting data, analyzing results using statistical techniques and providing ongoing reports.
• Preparing case diagrams and activity diagrams for various scenarios.
• Collating data, studying patterns and conducting brainstorming sessions to identify outliers.
• Reviewing and approving project documentation.
• Assisting in identification of project risks and setting up risk mitigation plans by reviewing dashboards and reports.
• Customer feedback information and analysis.
• Reviewing and validating inputs from Project Managers regarding dashboards and PPTs.
• Supporting the TL by training people on process/domain as a part of the growth plan and SLA compliance.
company - Capita India Pvt ltd
description - Pune
Key Result Areas
Audits
• Reviewing and validating inputs from managers regarding dashboards and PPTs.
• Auditing work done by onshore agents, and simultaneously auditing the work and reporting done for my previous team.
• Assisting the reporting manager in business transformation, with proven ability to influence and collaborate across all levels of the organization.
• Helping line managers to solve specific audit problems, either on a one-to-one basis or in groups.
Reporting
• Preparing weekly/monthly/quarterly/yearly MIS - variance report, performance report, feedback analysis, task activities report; publishing relevant business dashboards and project audit reports. |
PMO | AREA OF EXPERTISE (PROFILE)
Over 10 years' proven experience with the global brand Wipro, with expertise in:
• PMO
• ITIL Management
• Process Improvements
• Project Process Audits
• Planning, Scheduling, Effort/Issue/Risk Tracking
• Risk & Issue Management
• SLA Management
• Workforce (Staffing) / Resource Management
• Transition
• Operations Management
SKILLS
Project Management Tools: CA Clarity, Visio, MS Office; ITIL incident management; recruitment and workforce management
Technical: SAP-HR, MRS, CPRO, Confluence, Microsoft Office (Word, PowerPoint); excellent knowledge of and hands-on experience in advanced MS Excel (knowledge of MS Project and SharePoint)
Reporting & Ticketing Tools: Xtraction, CA Service Desk, I-Tracker
Education Details
MBA HR and Finance Bengaluru, Karnataka RKIMS College
Senior Executive PMO
Senior Executive PMO Consultant
Skill Details
OPERATIONS - Experience - 125 months
STAFFING - Experience - 125 months
HR - Experience - 79 months
PMO - Experience - 84 months
Company Details
company - Ensono LLP
description - Roles & Responsibilities
• Responsible for creating structured reports and presenting them to senior delivery management as per business requirements.
• Design and draft various reports as per business requirements.
• Responsible for preparing MOM, following up with people and keeping SLAs on track by achieving targets and results on time.
• Assist the project managers in creating RRs, deputation, invoicing and billing activities.
• Maintain Clarity and SharePoint data for service delivery management.
• Perform customer invoicing at the direction of the CEM and SDM.
• Weekly preparation of SLA and KPI data based on the manual tracker, shared with the client and senior management.
• Project implementation management, invoicing and billing management, and participation in establishing the client's contractual documentation.
• Experience in various delivery models such as Managed Services, Fixed Price, T&M, and SLA-based Risk and Penalty.
• Manage SLA targets and avoid penalties towards customers; drive SLA calls with 80-plus customers across multiple towers.
• SPOC for the time on floor analysis (TOFA) report and highlighting employee tailgating data to senior management.
• Ensure resolution of compliance-related issues and floor maintenance.
• Ensure all joining formalities and onboarding activities for new employees.
• Identify and drive key metrics such as billing efficiency and resource utilization.
• Maintain the project library, filing, recording and reporting systems.
• Monitor project progress, risks, roadblocks and opportunities, and manage communications to stakeholders.
• Develop flow charts/SOPs, maintain the process change database and monitor severity calls.
• Prepare monthly reports (operational report, capacity/utilization report, timesheet report, SLA compliance report), quarterly reports (operational report with quarterly trends) and internal reports (allowances, billing reports, repository maintenance of documents). Create project/sub-project plans, monitor progress against schedule, and maintain risk and issue logs.
• Actively participate in project management communities.
• Responsible for project cost, schedule, budget, revenue and milestone progress.
company - Wipro Technology
description - Roles & Responsibilities
• Responsible for creating structured reports and presenting them to senior delivery management as per business requirements.
• Design and draft various reports as per business requirements.
• Responsible for preparing MOM, following up with people and keeping SLAs on track by achieving targets and results on time.
• Assist the project managers in creating RRs, deputation, invoicing and billing activities.
• Maintain Clarity and SharePoint data for service delivery management.
• Perform customer invoicing at the direction of the CEM and SDM.
• Weekly preparation of SLA and KPI data based on the manual tracker, shared with the client and senior management.
• Project implementation management, invoicing and billing management, and participation in establishing the client's contractual documentation.
• Experience in various delivery models such as Managed Services, Fixed Price, T&M, and SLA-based Risk and Penalty.
• Manage SLA targets and avoid penalties towards customers; drive SLA calls with 80-plus customers across multiple towers.
• SPOC for the time on floor analysis (TOFA) report and highlighting employee tailgating data to senior management.
• Ensure resolution of compliance-related issues and floor maintenance.
• Ensure all joining formalities and onboarding activities for new employees.
• Identify and drive key metrics such as billing efficiency and resource utilization.
• Maintain the project library, filing, recording and reporting systems.
• Monitor project progress, risks, roadblocks and opportunities, and manage communications to stakeholders.
• Develop flow charts/SOPs, maintain the process change database and monitor severity calls.
• Prepare monthly reports (operational report, capacity/utilization report, timesheet report, SLA compliance report), quarterly reports (operational report with quarterly trends) and internal reports (allowances, billing reports, repository maintenance of documents). Create project/sub-project plans, monitor progress against schedule, and maintain risk and issue logs.
• Actively participate in project management communities.
• Responsible for project cost, schedule, budget, revenue and milestone progress.
company - Wipro InfoTech
description - Responsibilities
• Monitor and manage actual vs. planned headcount for the region to maintain the headcount-to-revenue ratio.
• Maintain and monitor correct tagging in SAP (project tagging, supervisor tagging, org unit and cost center) for the region so that the financials are maintained properly.
• Responsible for providing an exact and accurate headcount report for GM calculation.
• Responsible for bench management and deploying resources.
• Responsible for managing and driving tenure management for eligible employees, deploying them according to their aspirations and business need.
• Responsible for hiring and maintaining the rookie ratio for the location, actively tracking their training and deploying them.
• Analyze past volume and staffing patterns and implement actions based on the forecast provided, so that resource crunches can be addressed and resources are available on time for go-live.
• Validate the headcount plan for the project and work with stakeholders (Service Delivery Managers) to optimize resources.
• Ensure all required WFM data is tracked and trended on a continuous basis by the NLD team.
• Identify resources that have completed their tenure with the project, plan their training with the help of the training team, elevate them to higher roles and backfill them with rookies (TRB, TE, WIMS and SIMS).
• Interface with Service Delivery Managers/Directors as needed for escalation of service-impacting issues due to resource availability.
• Coordinate with Operations stakeholders to interface with the client, handle account management issues and add resources as per requirements.
• Manage the staff schedules and responsibilities of the Workforce Management team for the Region/BU.
• Prepare daily/weekly/monthly reports and distribute them to the management team.
• Manage staffing ratios and seat utilization/optimization to ensure project goals are met. Build effective working relationships with internal departments.
• Take care of special projects (PWD), the rookie hiring model, training and deployment.
PERSONAL DETAIL
DOB: 21/03/1986
PAN: AWVPB7123N
Passport: J1409038
Linguistic Ability: English, Hindi, Marathi, Kannada and Konkani
Location: Pune, India
Marital Status: Married |
PMO | Skills
Exceptional communication and networking skills
Successful working in a team environment, as well as independently
Ability to work under pressure and multi-task
Strategies & Campaigns
Corporate Communications
MIS Reporting & Documentation
Training & Development
Sales Support & Back Office Operations
New Process Development & Launch
Handling customer escalations
Education Details
BACHELOR OF BUSINESS ADMINISTRATION BUSINESS ADMINISTRATION ICFAI Business School
Integrated Institute Of Management &Technology
HIGHER SECONDARY SCHOOL, B.I.S.S School
Delhi, Delhi SENIOR SECONDARY SCHOOL, Delhi Public School
Senior Manager - PMO
Skill Details
TRAINING - Experience - 30 months
DOCUMENTATION - Experience - 16 months
OPERATIONS - Experience - 16 months
SALES - Experience - 8 months
CORPORATE COMMUNICATIONS - Experience - 6 months
Company Details
company -
description - Review and understand existing business processes to identify functional requirements to eliminate waste, improve controllership and deliver flexibility
Identify processes for re-design, prototype potential solutions, calculate trade-offs and costs, and suggest a recommended course of action by identifying modifications to new/existing processes
Project management of new requirements and opportunities for applying efficient and effective solutions
Responsible for delivering process reengineering projects across processes by working closely with the relevant businesses and operations units
Responsible for documentation to train all stakeholders on any changes
company -
description - Responsible for defining the scope of the project in accordance with the stakeholders, internal teams and senior management team.
Prepare project charter with defined timelines for project related activities.
Preparation of Business Requirement Document (BRD), closing Understanding Document (UD) with development team, UAT completion and deployment.
Preparation of training documents, SLAs, SOPs etc. as required.
Conduct training for impacted teams to ensure smooth transition.
company - TELEPERFORMANCE INDIA
description - Driving sales through the call center and achieving targets, with overall responsibility for exploring selling opportunities by understanding customer preferences and requirements.
Conceptualizing and implementing sales promotional activities as a part of pilot batch for new company launch.
Training new joiners through the process of call barging.
Interaction with client to understand requirements and expectations.
Handling call quality sessions with the client.
Handling ad hoc requirements from the client as well as senior management and delivering timely resolutions.
MASTER OF BUSINESS ADMINISTRATION |
Database | TECHNICAL EXPERTISE
• DB Languages: SQL
• Database Tools: SQL Server 2014/2017, PostgreSQL 9.5/9.6, Oracle 11gR2
• Operating Systems: Red Hat Linux, Oracle Linux, Windows Server 2012/2016
OTHER TECHNICAL SKILLS
ORACLE 11G R2
• Proficient in Oracle database software installation, creation of databases using GUI/silent DBCA, architecture, file management, space management, user management, creating roles and assigning privileges/roles in 11gR2, and troubleshooting them.
• Hands-on experience with control file/redo log/archive/undo management.
• Configuring listener.ora/tnsnames.ora files using Netmgr/NETCA.
• Generating AWR, ADDM and ASH reports to diagnose problems.
• Database backup and cloning/duplication using hot & cold backups with RMAN.
• Knowledge of Flashback technologies and expdp/impdp.
• Implemented Oracle 11gR2 RAC on the Oracle Linux platform; knowledge of utilities for troubleshooting RAC (CRSCTL, SRVCTL).
• Knowledge of installation and configuration of RAC; adding/removing nodes in RAC.
• Configuration of physical standby databases (Data Guard).
• Successfully upgraded from 11.2.0.1 to 11.2.0.4 and applied PSU patching using OPatch.
STRENGTHS
• Good communication skills.
• Self-confident and able to adapt to all work environments.
• Enjoy responsibilities as a lead and team player.
• Patient listener and quick learner.
• Capable of explaining issues and solving them.
Education Details
B.E Computer Engineering Mumbai, Maharashtra Mumbai University
Higher Secondary Certificate Dr. DY Patil Jr College
Database Administrator
Database Administrator - DBA in Marketplace Technologies Ltd
Skill Details
DATABASE - Experience - 61 months
BACKUPS - Experience - 48 months
LINUX - Experience - 48 months
MS SQL SERVER - Experience - 48 months
SQL - Experience - 48 months
Company Details
company - DBA in Marketplace Technologies Ltd
description - Project Title: EBoss, Datafeed, MFDB, RTRMS, IndiaINX
company - Standard & Enterprise
description - Redhat Linux 7.4, Postgresql 9.5, 9.6
Duration: Feb 2017 - till date
Description: Bombay Stock Exchange (BSE) is Asia's first and the fastest stock exchange in the world, with a speed of 6 microseconds, and one of India's leading exchange groups, providing an efficient and transparent market for trading in equity, currencies, debt instruments, derivatives and mutual funds. BSE SME is India's largest SME platform, which has listed over 250 companies and continues to grow at a steady pace.
JOB ROLES & RESPONSIBILITIES
POSTGRESQL
• Worked on a Red Hat Linux OS cluster with PostgreSQL for high availability (HA) using Pacemaker.
• Coordinated with Developer/Linux teams for database knowledge and support.
• Participated in implementation of new releases into production.
• Installed/configured PostgreSQL from source or packages on Red Hat Linux servers.
• Performed PostgreSQL server management tasks, i.e. backup & restore, configuration, roles, blockings, tablespace creation and troubleshooting.
• Worked with the storage team on the disaster recovery (DR) setup built on SAN using EMC technology.
• Configured LDAP authentication and GSSAPI authentication from Windows to Linux for PostgreSQL.
• Configured logical replication for database servers, hot standby PostgreSQL servers, faster database backup methods, and schema and tablespace backups (a replication sketch follows this list).
• Configured maximum connections to databases on Linux servers.
• Installed tds_fdw from source for linked servers to connect to heterogeneous databases, plus other required extensions; backup configuration; PITR using base backups.
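As a rough illustration of the logical replication setup mentioned in the list above, a minimal sketch follows. It uses the native publication/subscription syntax available from PostgreSQL 10 onward (on the 9.5/9.6 servers described here an extension such as pglogical would be needed instead), and the table, host and credential names are hypothetical placeholders rather than details taken from this environment.

```sql
-- Minimal logical replication sketch (PostgreSQL 10+ syntax).
-- All object names, hosts and passwords below are hypothetical.

-- On the publisher (primary) database:
-- wal_level must already be set to 'logical' in postgresql.conf.
CREATE PUBLICATION trades_pub FOR TABLE trades;

-- On the subscriber database, with the same table definition in place:
CREATE SUBSCRIPTION trades_sub
    CONNECTION 'host=primary.example.com port=5432 dbname=marketdb user=repl password=secret'
    PUBLICATION trades_pub;

-- Check replication slot status on the publisher:
SELECT slot_name, active FROM pg_replication_slots;
```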
MSSQL
• Day-to-day administration of live SQL Servers.
• Participated in live Primary Recovery (PR) and Disaster Recovery (DR) activities.
• Participated in PR & DR mocks for new releases into production.
• Configured linked servers, transactional replication, and maintenance tasks such as database backup & restore, recovery, scheduled jobs and maintenance plans (a backup/restore sketch follows this list).
• Installed and configured SQL Server 2014 and 2017 standalone and SQL cluster servers.
• Maintained the security of databases by providing appropriate SQL roles, logins and permissions to users on demand.
• Worked with teams on application rollouts, application issues and SQL Server migrations.
• Exposure to handling production systems with skill and understanding of client requirements.
• Performed SQL Server service pack upgrades and hotfixes.
• Handled multiple SQL instances in a Windows SQL cluster environment built on EMC SAN.
• Worked on MSSQL DB clusters with active/active and active/passive servers, Always On Availability Groups (AAG) and HA/DR setup.
• Experience with SAN and RAID levels, and building and supporting SQL cluster servers in SAN environments.
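A minimal T-SQL sketch of the routine backup and restore tasks listed above; the database name, logical file names and paths are hypothetical and not taken from the environment described.

```sql
-- Hypothetical full backup and restore to a refresh copy.
BACKUP DATABASE [TradeDB]
TO DISK = N'D:\Backups\TradeDB_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Restore the backup as a separate copy (e.g. for a QA refresh),
-- relocating the data and log files with MOVE.
RESTORE DATABASE [TradeDB_QA]
FROM DISK = N'D:\Backups\TradeDB_full.bak'
WITH MOVE N'TradeDB_Data' TO N'E:\Data\TradeDB_QA.mdf',
     MOVE N'TradeDB_Log'  TO N'F:\Log\TradeDB_QA.ldf',
     RECOVERY, STATS = 10;
```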
company - BSE Bombay Stock Exchange
description - Environment: Windows Server 2008 R2, 2012 R2, 2016 Enterprise & Standard |
Database | Technical Expertise
Operating Systems: Microsoft Windows Server 2003/2008/2008 R2/2012
Database Technologies: SQL Server, Sybase ASE Server, Oracle, MongoDB
Monitoring and Ticketing Tools: HP Service Manager 7.0/9.0, SolarWinds DPA, JIRA and MongoDB Ops Manager
Web Server: IIS 7.0
Database Tools: SSMS, DBArtisan, Studio 3T, SnapShot Manager for SQL Server
Education Details
B. Tech Computer Science Gulbarga, Karnataka PDACOE, Gulbarga, Autonomous Institution
Database Administrator II
Database Administrator III - BNY Mellon International Operations (India) PVT. LTD
Skill Details
SQL DBA - Experience - Less than 1 year
Company Details
company - BNY Mellon International Operations (India) PVT. LTD
description - SQL Server :
ï Installation, configuration of database servers using slipstream and setup all the maintenance jobs as per the standard policy on standalone as well as cluster environments with latest service packs
ï Installation of SSRS, uploading of .rdls and assigning correct data sources to reports. Grant necessary access to users & developers on reporting website. Aware of SSIS and designing packages as well.
ï Create and manage logins, users for database applications, assigning permissions as per requests, resolving user login issues.
ï Migration of all SQL server 2005/2008 servers to higher versions.
ï Setup of database refresh jobs on QA, DEV and UAT environments and fixing orphaned users.
ï Troubleshoot performance related issues.
ï Part of multiple projects to work with developers and provide all required support for testing in QA, UAT & DEV environment.
ï Lead the DR tests for database team.
ï Participate in database purge and archive activities.
ï Writing codes for automating database administration tasks.
ï Worked on automating DR tasks to start the agent jobs on multiple servers, restore databases for log shipped databases without manual intervention for online databases post DR activities.
ï Provide support to vendor databases, follow up with the vendor calls and timely escalate to next level when there is no update in predefined timeline.
ï Installation and configuration of smsql on windows server. Schedule jobs for creation and deletion of clones on sql server. Maintain backups using smsql.
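A minimal sketch of the orphaned-user check and fix mentioned above (the kind of step that follows a QA/DEV/UAT refresh); the user and login names are hypothetical and assume a matching server login already exists.

```sql
-- List database users whose SIDs no longer match a server login (orphaned users).
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type IN ('S', 'U')                        -- SQL and Windows users
  AND dp.authentication_type_desc = 'INSTANCE';

-- Re-map an orphaned user to the matching server login after a refresh.
-- 'app_user' and 'app_login' are hypothetical names.
ALTER USER [app_user] WITH LOGIN = [app_login];
```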
MongoDB Server:
• Installation and configuration of MongoDB server.
• Creation of databases and collections.
• Creation of new users and granting access using Ops Manager.
• Monitoring database servers using Ops Manager.
Oracle & Sybase Server
• Managing and maintaining multiple database instances on Linux and Windows servers.
• Monitoring daily jobs, including backups, refresh and maintenance jobs.
company - Hewlett-Packard India Sales PVT. LTD. On the payroll of Softenger India PVT. LTD
description - ï Installation of SQL Server on standalone as well as windows cluster environments with latest service packs
ï SQL server installation using slipstream.
ï Installation of reporting services
ï Creating logins and users, assigning permissions as per requests.
ï Security audit for all logins includes maintenance of unused and orphan user logins
ï Create & Maintain daily and weekly jobs/maintenance plans includes backup, index rebuild/reorganize , update statistics and database consistency check
ï Create linked servers and ensure connectivity between servers
ï Monitor disk space proactively & Space management using data and log file shrinking
ï Monitor blocking, deadlocks, open transactions and slow running queries during performance issues and highlight costly queries to developers.
ï Configure alerts for deadlock and blocking to maintain performance
ï Implementing high availability technologies like log shipping, AlwaysON, mirroring and its troubleshooting, also have knowledge on replication
ï Successfully completed migration of Databases from one server to another
ï Performing DR drills (Online/Offline) on quarterly basis
ï Power shell scripting to monitor, restart SQL service and get Email alert for the service status.
ï Maintain TNS entries for oracle client as per client requests.
ï Interacting with customers for requirements
ï Contacting customer to update the status of handling issues and service requests at every stage of resolution
ï Managing proper escalation and notification matrix for all support levels |
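A sample of the kind of ad hoc query that might be used for the blocking monitoring mentioned in the list above; it relies only on standard SQL Server DMVs and is a sketch, not the exact script used in this environment.

```sql
-- List requests that are currently blocked and the session blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       DB_NAME(r.database_id) AS database_name,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```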
Database | TECHNICAL SKILLS
Operating Systems: MS Windows Server 2012/2008/XP
Software and Tools: MS LiteSpeed, Idera SQL Safe, SSMS, Upgrade Advisor, SQL Server Profiler, SCOM, Diagnostic Manager, Remedy, Jira, Infopacc, Tivoli TDP backup tool, SQL Pack
Databases: MS SQL Server 2016/2014/2012/2008 R2/2008, Oracle 10g, Netezza, Microsoft Azure
Education Details
Masters of Science Computer Science Pune, Maharashtra Indira College, Pune University
Lead database administrator
Microsoft Certified Professional with 11 years of experience in database administration on MS SQL Server 2016/2014/2012/2008 R2/ 2008
Skill Details
MS SQL SERVER- Experience - 110 months
Microsoft Azure- Experience - Less than 1 year
Always On availability group- Experience - Less than 1 year
Database mirroring- Experience - Less than 1 year
Performance tuning- Experience - Less than 1 year
Log shipping- Experience - Less than 1 year
Installation, upgrade, migration and patching- Experience - Less than 1 year
Company Details
company - Ensono
description - Employment transfer as a part of project acquisition to Ensono from Wipro.
SQL Server Database Administration
company - Wipro Technologies
description - Microsoft Certified Professional with 11 years of experience in database administration on MS SQL Server 2016/2014/2012/2008 R2/ 2008.
Experience with MS SQL Server 2016/2014/2012/2008 R2/ 2008 installation, upgrade, and administration
Microsoft Azure certified.
Have an understanding of Azure VM, Azure Storage, Azure network, Azure AD and Azure SQL database.
Incident management, change management and Problem management for SQL Server Database team.
Participating in meetings, conference calls with client, Service Delivery Manager and Application team for System improvements.
Participated in quarterly DR activity.
Involved in creation of SIP - Service Improvement Plans
Involved in handling of high severity issues and provided RCA for the same.
Worked on Always on availability groups, database mirroring, replication, clustering and log shipping.
Have basic understanding of Oracle and Netezza.
Provided on- call support during out of office hours and weekends.
Resource & shift management of 5 SQL DBAs from offshore in multi-client environment for Data center services.
Provided KT to team members, monitor and guide trainees.
company - Wipro Technologies
description - Responsibilities: • MS SQL Server 2016/2014/2012/2008 R2/2008 installation, configuration, and administration.
• Worked on Always On availability groups, log shipping, database mirroring and clustering (see the query sketch below).
• Participated in PCI scan report to perform installation of security hotfixes and service packs for SQL servers to remove vulnerabilities.
• Participated in Holmes BOTS automation implementation of the SQL Pack tool.
• Worked on service requests, incidents and critical issues.
• Involved in conference calls to provide DBA support for critical issues.
• Performance tuning.
Environment: SQL Server 2016/2014/2012/2008R2/2008, Windows Server 2012/2008R2/2008
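An illustrative T-SQL query for the Always On monitoring mentioned above; it reads the standard HADR catalog views and DMVs to show each replica's role and synchronization health (no environment-specific names involved):

    -- List every availability group replica with its role and sync health.
    SELECT ag.name AS ag_name,
           ar.replica_server_name,
           ars.role_desc,
           ars.synchronization_health_desc
    FROM sys.availability_groups ag
    JOIN sys.availability_replicas ar ON ar.group_id = ag.group_id
    JOIN sys.dm_hadr_availability_replica_states ars ON ars.replica_id = ar.replica_id;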
company - Mphasis
description -
company - Mphasis
description - Responsibilities: • MS SQL Server 2012/2008 R2/2008 installation, configuration, and administration.
• Worked on Always On availability groups, log shipping, database mirroring and clustering.
• Performed SQL Server patching activity • Worked on daily reports like cluster failover, backup, AG/LS/Mirror report and server disk space report.
• Worked on service requests, incidents and critical issues.
• Participated in quarterly DR activity.
• Involved in conference calls to provide DBA support for critical issues.
• Provided support to the Windows team during patching for AG/mirror/cluster failover/failback and database health checks.
• Performed all health checks for market-open servers and provided updates in the market-open call • Deeply involved in issue resolution and finding the root cause of issues • Performance tuning.
Environment: SQL Server 2012/2008R2/2008, Windows Server 2008R2/2008
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: • SQL Server, Oracle and Netezza database support tasks.
• MS SQL Server 2008 R2/2008 installation, upgrade, and administration.
• Performed capacity planning for database growth for all SQL servers.
• Troubleshooting alerts.
• Worked on log shipping and mirroring.
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008, Oracle 10g/RAC
company - Synechron Technologies Pvt. Ltd
description -
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: • Pursued in-depth training on Oracle 11g Architecture and SQL Server.
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008, Oracle 10g
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: • Carried out version changes for schemas from PE8 version to EE11 version as per the process given
Environment: Oracle 11g
company - Mastek Ltd
description - Responsibilities: • SQL Server 2008 R2/2008 installation, upgrade, and administration • database backup/restore.
• Performed MS SQL Server audits • Worked with database mirroring, replication, log shipping and clustering.
• Supported UAT and PROD environments • Performed deployment document review.
• Carried out deployments for different applications
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008
company - Mastek Ltd
description -
company - PP Software and Systems Ltd
description -
company - PP Software and Systems Ltd
description - Description: The system provides Master Data Management and Procurement modules for dairy industry.
Responsibilities: • Designed, coded, and tested • Customized ERP system as per the requirement
Environment: Core Java, PostgreSQL |
Database | SKILLSET Oracle DBA, MySQL, MariaDB, PostgreSQL Database Administration IT SKILLS SQL, Oracle 10g, 11g, MySQL, MariaDB, PostgreSQL, Windows, Linux, Putty Education Details
January 2018 MCS Pune, Maharashtra Pune University
Database administrator
Database administrator - Infiniteworx Omnichannel Pvt. Ltd
Skill Details
DATABASE- Experience - 17 months
MYSQL- Experience - 17 months
ORACLE- Experience - 17 months
SQL- Experience - 17 months
DATABASE ADMINISTRATION- Experience - 6 months
Company Details
company - Infiniteworx Omnichannel Pvt. Ltd
description - Pune Sept 2017 to Present
RESPONSIBILITIES:
⢠Creating tablespaces and planning the location of data, monitoring the tablespaces growth periodically.
⢠All replication setup
⢠Moved database Schema changes to stage.
⢠Dba support query resolution.
⢠Creating user and giving specific privileges
⢠Database management.
⢠Database recovery, moving data files to different locations.
⢠Planning the backup policies and Backup/ Recovery of databases based on the criticality.
⢠IMPORT/EXPORT.
⢠Degine schemas
Key Result Areas:
⢠Providing 24 /7 support to resolve database performance issues, Job failures, Sessions & diagnose root causes
⢠Installation, configuring and updating Oracle server software and related Oracle products. Installation, configuraing and updating Mysql, Sql server, MariaDB, MongoDB
⢠Supported multiple databases and administered Oracle Databases of Large DB Sizes for production, development & test setups.
⢠Maintaining table spaces & data files, Control files, Online Redo log files
⢠Creating Users, granting Roles & Privileges to users and managing tablespaces for different users by granting quota on Default & Temporary tablespaces.
⢠Taking Oracle RMAN Backups (Scheduling for day wise backup)
⢠Implementing the incremental, cumulative and full RMAN backup for each database to have space management and effective recovery.
⢠Logical Backup Using Export & Import/datapump Export of important tables at regular intervals.
⢠Regular checking of trace, alert log file, all ORA errors
⢠Working on incidents like User creation/deletion incidents, backup failed incidents.
⢠Checking Listener Status, connectivity Troubleshooting and fixing database listener issues.
⢠Look for any new alert / error log entries / errors, error details in Trace files generated. Executing DDL & DML scripts as per customer requirements
⢠Mentoring, coaching and appraising team members with active involvement in the recruitment process
⢠Contributing in Project Documentation and generating daily reports
⢠Ensuring compliance to quality norms and taking steps for any non-conformance Spearheading complete project activities ensuring timely completion of project
⢠Implementing security policies on different database systems with granting and revoking privileges to the users
⢠Following change management processes and participated in related meetings
⢠Verifying all Instances/DB are running, Tablespaces are online, Monitor Backround processes and status.
company - InnovativeTechnologies
description - Clients: BANKING DOMAIN |
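A hedged Oracle SQL sketch of the tablespace, user and privilege tasks described in this record (the file path, user name and password are placeholders, not details from the resume):

    -- Create a tablespace, a user with quota on it, and basic privileges.
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 100M MAXSIZE 4G;

    CREATE USER app_user IDENTIFIED BY "ChangeMe#1"
      DEFAULT TABLESPACE app_data
      TEMPORARY TABLESPACE temp
      QUOTA UNLIMITED ON app_data;

    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO app_user;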
Database | Education Details
January 2016 BSc. Mumbai, Maharashtra Mumbai University
January 2013 H.S.C. Maharashtra Board
January 2011 S.S.C. Maharashtra Board
MySQL Database Administrator
2+ Years of experience in MySQL Database Administrator ( MySQL DBA)
Skill Details
MySQL DBA, CentOS, Backup, Restore, Replication, Query Optimization- Experience - 24 months
Company Details
company - Trimax IT Infrastructure & Services Ltd
description - • MySQL installation, maintenance and upgrades (versions 5.5, 5.6)
• MySQL database administration on a large-scale MySQL installation
• Experience with MySQL on both Linux and Windows
• MySQL processes, security management and query optimization.
• Performed query analysis for slow and problematic queries.
• Performed structural changes to databases, like creating tables and adding columns according to business requirements
• Creating and maintaining database maintenance plans.
• Writing scripts to create jobs for backup & restore plans.
• Working on MyISAM to InnoDB engine conversion.
• Working on server shifting, parameter tuning and database purging
• Working on MySQL master-slave replication (see the sketch at the end of this record)
• Handling release management and user acceptance.
• Restore using xtrabackup.
• Responsibilities include monitoring daily, weekly and monthly system maintenance tasks such as database backup, replication verification, database integrity verification and indexing updates
• Work in 24/7 production database support.
company - Trimax IT Infrastructure & Services Ltd
description - • MySQL installation, maintenance and upgrades (versions 5.5, 5.6)
• MySQL database administration on a large-scale MySQL installation
• Experience with MySQL on both Linux and Windows
• MySQL processes, security management and query optimization.
• Performed query analysis for slow and problematic queries.
• Performed structural changes to databases, like creating tables and adding columns according to business requirements
• Creating and maintaining database maintenance plans.
• Writing scripts to create jobs for backup & restore plans.
• Working on MyISAM to InnoDB engine conversion.
• Working on server shifting, parameter tuning and database purging
• Working on MySQL master-slave replication
• Handling release management and user acceptance.
• Restore using xtrabackup.
• Responsibilities include monitoring daily, weekly and monthly system maintenance tasks such as database backup, replication verification, database integrity verification and indexing updates
• Work in 24/7 production database support. |
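A minimal sketch of the MySQL 5.5/5.6-style master-slave replication setup mentioned in this record, run on the slave; the host, credentials and binlog coordinates are placeholders:

    -- Point the slave at the master's binary log position and start replication.
    CHANGE MASTER TO
      MASTER_HOST = '10.0.0.10',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'repl_password',
      MASTER_LOG_FILE = 'mysql-bin.000123',
      MASTER_LOG_POS = 4;
    START SLAVE;
    -- Verify Slave_IO_Running / Slave_SQL_Running = Yes and Seconds_Behind_Master.
    SHOW SLAVE STATUS\G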
Database | TECHNICAL SKILL: Operating System LINUX, Windows Server 2012 R2, Windows 98, Windows 2000/XP Tools & Utility Packages SQL*Loader, SQL*Plus, OEM, Data Pump, expdp/impdp, PLSQL Developer, Jenkins Database Oracle 10g, Oracle 11g, Oracle 12c Scripting UNIX Shell Scripting Language SQL Education Details
January 2011 M.B.A. Amravati, Maharashtra Amravati University
January 2007 B.C.A. Nagpur, Maharashtra Nagpur University
Oracle Database Administrator
ORACLE DATABASE ADMINISTRATOR ON LINUX/MICROSOFT WITH 4 YEARS EXPERIENCE.
Skill Details
ORACLE- Experience - 48 months
LINUX- Experience - 6 months
ORACLE DBA- Experience - Less than 1 year
RAC- Experience - Less than 1 year
GOLDEN GATE- Experience - Less than 1 year
ASM- Experience - Less than 1 year
DATAGUARD- Experience - Less than 1 year
Company Details
company - TIETO INDIA PVT. LTD
description - Pune From February 2015 till present
Project Profile:
The Oil and Gas unit of Tieto India Pvt. Ltd. works on the Energy Components (EC) application, for which Tieto is the authorized service provider. Energy Components is a complete end-to-end hydrocarbon accounting solution that follows the hydrocarbons from production to transport, sales and revenue recognition. It is globally market-leading hydrocarbon accounting software with functionality coverage exceeding other available solutions, built on a modern, flexible and scalable technology platform, and selected as the global standard and best practice by oil & gas supermajors.
Responsibilities: • Oracle Database Administration 11g R2, 12c and 18c • Supporting databases in 24x7 environments and coordinating with Application, OS, Storage and Development teams across test and production environments • Regularly monitoring the trace files and alert log files for database related issues.
• Experience in monitoring CPU usage, IO and memory utilization at the OS level.
• Checking the alert log file to analyze ORA errors, if any, and raising SRs with Oracle.
• Monitoring the log files, backups, database space usage and the use of system resources.
• Configuring backups (RMAN) for databases and restoring databases.
• Installation, configuring and updating Oracle server software and related Oracle products of 11g and 12c.
• Oracle server installation, client installation and configuration, PLSQL Developer installation.
• Creating databases using DBCA and manually.
• Creating Oracle users and granting proper privileges to users as per request.
• Creating AWR, ASH and ADDM reports for database performance analysis.
• Handling space management and performance issues in Oracle databases.
• Creating remote database links (see the sketch at the end of this record).
• Renaming and resizing data files in the Oracle database if needed.
• Tablespace shrinking at regular intervals to reclaim server space.
• Expertise in export and import using Data Pump in Oracle databases.
• Expertise in configuration of the listener and tnsnames through NETMGR and NETCA, and statically as well.
• Managing the Oracle listener and Oracle network files.
• Creating user profiles, granting specific privileges and roles to users in the Oracle database.
• Maintaining tablespaces & data files, control files, online redo log files in the Oracle database.
• Worked on AWS cloud services like EC2, S3, RDS, ELB, EBS, VPC, Route53, Auto Scaling, CloudWatch, CloudFront and IAM for installing, configuring and troubleshooting various Amazon images for server migration from physical into the cloud. |
Database | Technical Skills Databases: Oracle RDBMS- 10g, 11g & 12c Technology/utilities: Data Pump, RMAN, Data Guard, ASM, RAC, Golden Gate Tools: OCC, PUTTY, SQLPLUS, SQL Developer, Netbackup, SCOM, SCCM, VMWare Vsphere Operating Systems: RHEL 6.0, RHEL 6.5, UNIX and Microsoft Windows Education Details
Database Administrator
Database Administrator - BNY Mellon
Skill Details
DATABASES- Experience - 24 months
ORACLE- Experience - 24 months
RMAN- Experience - 24 months
NETBACKUP- Experience - 24 months
SCOM- Experience - 24 months
Company Details
company - BNY Mellon
description - Databases: 600+
Team Size: 8
Duration: Jan 2017 - Till Date
Clients: Over 130+ investment banking organizations who are hosted with Eagle
Responsibilities: Database management (support and managing critical production, pre-production, test and reporting databases on different platforms), capacity management and upgrades.
• Handling day-to-day database activities, monitoring and incident management.
• Building new databases as per requirements and preparing them for go-live with the help of multiple teams.
• Working on scheduled database patching activity (CPU, PSU) • Installing the latest patches on production, Dev and Test databases as per suggestions from Oracle Support.
• Database upgrades from 11g to 12c.
• Adding disks to ASM disk groups.
• Building DR databases using Active Data Guard, keeping them in sync with prod and resolving issues if any persist • Data Guard management: checking lag status, removing archive lag, checking processes like RFS/MRP, archive management (see the sketch following this list) • Working on tablespace related issues • Managing user access and profiles • Importing and exporting using Data Pump • Maintaining an inventory of all databases in a single centralized database • Refreshing the test environment from the production database.
• Working with Oracle Support to resolve Oracle errors.
• Scheduling daily and weekly database backups using RMAN and troubleshooting RMAN issues.
• Database cloning using RMAN.
• Taking part in cutovers to upgrade the application to a higher version.
• Strictly following the ITIL process in incident management and change management.
• Providing a weekly report of issues in the team meeting, also participating and suggesting service improvement plans.
• Database migrations from one server to another or to different platforms • RCA and impact analysis reporting during any production outage.
Previous Organization: Brose India
Project I: Central IT Management
company -
description - Responsibilities: Managing our internal databases and servers of Brose global.
⢠Providing 24x7 on-call support in the rotational shifts.
⢠Performing day-to-day database activity ⢠Monitoring and responding DBA group Mails for all alerts, issues and ad-hoc business user requests, etc.
⢠Database creation, patching ⢠Backup of Database in frequent cycles using Data pump/RMAN.
⢠Database refreshes using RMAN, Datapump.
⢠Recovery using copy of data / RMAN ⢠Monitoring logs and trace for resolving issues.
⢠Creating new VM servers and prepare it for go live, Also decommissioning as per requirements.
⢠Weekly patching of windows servers using SCCM and taking actions for patching if needed ⢠Monitoring and troubleshooting of daily and weekly OS backup using Symantec Netbackup ⢠Managing user accounts of OS users and database users ⢠Monitoring OS level alerts using SCOM
Project II: Data Center Migration (Onsite Project)
Responsibilities: The data center migration project involved moving our datacenter from one location to another; all our servers and databases were moved successfully.
• Installation of Oracle 11g on Linux platforms • Worked on service requests (Incidents / Change / Request) • Creation of users, managing user privileges • Configured RMAN backup for databases • Patching of databases • Configuring physical standby databases using Data Guard • Cloning of servers and migrating to another cluster
ACADEMIA / PERSONAL DETAILS • Bachelor of Engineering (B.E.) in Computer Science and Engineering from SGBAU Amravati University, Amravati in 2014 with a CGPA of 7.21
Current Address:- Mr. Yogesh Tikhat, C/O: Raj Ahmad, Flat# G2-702, Dreams Aakruti, Kalepadal, Hadapsar, Pune - 411028
Highest Qualification BE (cse)
PAN: - AOFPT5052C |
Database | Software Skills: * RDBMS: MS SQL SERVER 2000/2005/2008 & 2012, 2014 * Operating Systems: WINDOWS XP/7, WINDOWS SERVER 2008, 12 * Fundamentals: MS Office 03/07 * Tools: SSMS, Performance Monitor, SQL Profiler, SQL LiteSpeed. Company name: Barclays Technology Centre India. Team Size: 24 Role: Database Administrator Support Description: Barclays Technology is a UK-based retail & investment bank, over 300 years old. It has operations in over 40 countries and employs approximately 120,000 people. Barclays is organised into four core businesses: Personal & Corporate (Personal Banking, Corporate Banking, Wealth & Investment Management), Barclaycard, Investment Banking. Responsibilities: • Attending various calls from all over the world on various database issues. • Working on Web GUI alerts and resolving incident tickets within the timelines. • Troubleshooting log shipping issues and fixing the related alerts. • Identifying and resolving blocking and locking related issues (see the query sketch at the end of this record). • Configuration and monitoring of replication, log shipping and mirroring setups. • Working on replication issues and Always On issues. • Granting and revoking permissions for various account provisioning tasks. • Providing on-call support during the weekend, performing DR tests, and working on weekly maintenance jobs and weekend change requests. Education Details
B.Sc. Maths Kakatiya University Board secured
SQL server database administrator
Database administrator
Skill Details
DATABASE- Experience - 120 months
DATABASE ADMINISTRATOR- Experience - 72 months
MAINTENANCE- Experience - 48 months
MS SQL SERVER- Experience - 48 months
REPLICATION- Experience - 48 months
Company Details
company - Barclays global services centre
description - SQL Server database implementation and maintenance
Log shipping, replication, high availability, clustering, performance tuning, database mirroring, installation, configuration, upgrade, migration
company - Wipro Infotech Pvt Ltd
description - SQL server database administrator
company - CITI Bank
description - Worked as Database Support at Accord Fintech, Sanpada, from Sep 2008 to Feb 2013.
company -
description - 2012.
⢠Sound knowledge in Database Backup, Restore, Attach, and Detach and Disaster Recovery procedures.
⢠Developed backup and recovery strategies for production environment.
⢠Ensuring data consistency in the database through DBCC and DMV commands.
⢠Experience in query tuning and stored procedures and troubleshooting performance issues.
⢠Having hands on experience in DR process including log shipping and database mirroring.
⢠Experience in scheduling monitoring of Jobs.
⢠Experience in configure and troubleshooting in Always ON.
⢠Creating and Maintaining of Maintenance Plan.
⢠Expertise in planning and implementing MS SQL Server Security and Database permissions.
⢠Clear understanding of Implementation of Log Shipping, Replication and mirroring of databases between the servers.
⢠Performance Tuning (Performance Monitor, SQL Profiler Query Analyzer) ⢠Security for server & Databases Implementing security by creating roles/users,
Added users in roles, assigning rights to roles.
⢠Create and maintaining the linked servers between sql Instances.
⢠Create and maintaining and Database mail.
⢠Monitor and troubleshoot database issues.
⢠Creating DTS packages for executing the required tasks.
⢠Experts in create indexes, Maintaining indexes and rebuilds and reorganizes.
⢠Daily Maintenance of SQL Servers included reviewing
SQL error logs, Event Viewer. |
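An illustrative T-SQL query for the blocking and locking investigation work described in this record; it uses standard DMVs only:

    -- Show currently blocked sessions, their blockers and the running statement.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS running_sql
    FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
    WHERE r.blocking_session_id <> 0;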
Database | Areas of Expertise • Oracle Databases 12c, 11g, 10g • Weblogic 12c, 11g • Grid Infrastructure • RMAN • ASM • Middleware: OIM, OAM, SOA • Shell Scripts • DataGuard • Web servers - OHS, Apache • Architecture Designs • Proof of Concepts • DevOps Education Details
January 2007 Bachelor of Engineering Information Technology Sangli, Maharashtra Walchand College
January 2004 Diploma Computer Engineering Jalgaon, Maharashtra Govt. Polytechnic
Lead Database Administrator
Lead Database Administrator - Tieto Software
Skill Details
DATABASES- Experience - 108 months
MIDDLEWARE- Experience - 96 months
RMAN- Experience - 84 months
SHELL SCRIPTS- Experience - 48 months
Company Details
company - Tieto Software
description - As part of the AO (Application Operations) team, the scope of work in this project is considerably wider than typical database administration. Accomplishments extend right from the Data Tier to the Middle Tier & Application Tier:
- Maximized availability of applications from 99.3% to 99.8%
- Raised business by presenting Proof of Concepts for 10+ cases
- Delivered upgrades of various applications time to time to keep it on supported platform
- Saved SLAs as per contract by means of handling P1, P2 issues effectively
- Produced Capacity reports comprising all layers (Data, Middleware, Web) of various applications
- Generated Work Orders as per customer need
company - Tieto Software
description - - Designed databases of various applications
- Planned RMAN backup and recovery, BCP strategies
- Executed Business Continuity Testing for various applications
- Introduced Zero Cost high availability solutions - Active-Passive Failover
- Optimized performance by means of scripting automation
- Established cloning procedures for all layers of various applications
- Delivered Infrastructure changes, like FW Openings & LoadBalancer configuration for new applications
- Eliminated downtime by troubleshooting issues for Middleware products - OIM, OAM, SOA
- Contributed to build & maintain Integration Layer- SMTP, ftp, Reverse Proxy, OCM
company - Tieto Software
description - - Provided database support to environments - PROD, UAT, TEST, DEV
- Performed Database Refresh/Cloning from production to development and support databases
- Reduced risk level by upgrading & patching databases from time to time
- Protected databases by assigning appropriate roles and privileges as per SOD
- Generated & maintained Middleware schemas using RCU
- Exceeded scope of work by supporting & maintaining WebLogic platform - installation, patching, troubleshooting issues
- Expanded duty scope to web servers: Install & maintain- OHS, apache, tomcat
company - HSBC Software
description - Being part of project supporting HSBC Bank France, I achieved:
- Handled incidents & service requests as Day to day database administration tasks
- Delivered basic implementation services - Database installation, patching, upgrades
- Performed capacity planning - managing tablespaces, compressions
- Contributed in maintaining quality of databases - managing instances, indexes, re-organization, performance monitoring & tuning using AWR, ADDM reports
- Maintained backups & recovery of database - logical backups (exp/imp), datapump (expdp/impdp), cold backups, hot backups, RMAN backup/restore, RMAN Duplication
- Reduced efforts by automation - Value add initiatives which includes writing shell scripts for automated housekeeping operations, scheduling backups, use crontab/at to schedule tasks
- Implemented high availability solutions - Dataguard |
Database | Education Details
May 2011 to May 2014 Bachelor of science Information technology Mumbai, Maharashtra Mumbai university
Oracle DBA
Oracle database administrator
Skill Details
Installation of Oracle on RH Linux & Windows. Creating/managing user profiles and analyzing their privileges and tablespace quotas. Backup of databases using logical and physical procedures. Recovery of databases in case of database crash, disk/media failure, etc. Standard DBA functions like space management, rollback segments, extents. Database management and monitoring the database. Willing to learn new things. Being a constructive team member, contributing practically to the success of the team.- Experience - 48 months
Company Details
company - Accelya kale solutions ltd
description - Database Administrator working in 24*7 support environment maintaining Databases running on Oracle 11g, 12c.
Database Up-gradation from Oracle 11g to Oracle 12c.
Installation of Database critical patches.
Taking cold and hot backups on scheduled times and monitoring backups.
Importing the export dump to another database as per demands.
Automating most of the daily activities through cronjobs, shell scripts or schedulers.
Making Plan of Actions for Various Activities.
Raising SR with Oracle Support for different severity issues.
Handling user requests and proper client interaction.
Monitoring & managing database growth and tablespaces; adding, resizing and renaming datafiles (see the space-usage query sketch at the end of this entry).
Restoration of database using RMAN backups for backup consistency checks.
Migration of Database using export / import and RMAN backups.
Configuring & managing Physical Standby database.
Creating database links, Tablespaces, database directories.
Managing network settings through listener.ora and tnsnames.ora files.
Restoration of data using old logical backup as per client request.
Schema replication across databases through data pump tool.
Taking cold and hot backups on scheduled times and monitoring backups
Taking EXPDP of database, database objects and a particular schema
Using the SCP ticketing tool to keep track of client requests.
Performing maintenance activities such as index rebuilding and stats gathering.
Troubleshooting basic-level performance issues.
Setting up new environments from a database perspective within the requested timelines.
Adding/deleting disks in ASM and monitoring the ASM diskgroups.
Creating users & privileges with appropriate roles and levels of security.
Database Administrator working in 24*7 support environment maintaining databases running on Oracle 11g, 12c.
Performing online and offline database re-organization for database enhancement.
Migrating database from Non-ASM to ASM file system.
Grid up-gradation from 11g to 12C.
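A minimal Oracle SQL sketch of the tablespace growth monitoring mentioned above, comparing allocated space with free space per tablespace (standard dictionary views only):

    -- Allocated vs used space (MB) per tablespace.
    SELECT df.tablespace_name,
           ROUND(df.total_mb)                      AS allocated_mb,
           ROUND(df.total_mb - NVL(fs.free_mb, 0)) AS used_mb
    FROM  (SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_mb
           FROM dba_data_files GROUP BY tablespace_name) df
    LEFT JOIN
          (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
           FROM dba_free_space GROUP BY tablespace_name) fs
      ON fs.tablespace_name = df.tablespace_name
    ORDER BY df.tablespace_name;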
company - Insolutions Global Ltd
description - Oracle software installation (graphical/silent), database upgrades, patch upgrades.
Maintaining around 80+ UAT DB servers, 40 production DB and 28 standby/DR DB.
Managing/creating DR & standby servers, DB sync.
Backup and recovery (RMAN/ Datapump).
Performing activities like switchover and failover.
Allocating system storage and planning future storage requirements for the database system
Enrolling users and maintaining system security.
Monitoring Alert log, Snap ID generation, db size, Server space, OEM reports, User validity.
Controlling and monitoring user access to the database .
Scheduling shell scripts or dbms_jobs using crontab or DBMS_SCHEDULER (monitoring scripts, listener checks, backup scripts, AWR reports), etc.; see the sketch after this entry.
Planning for backup and recovery of database.
Managing the production databases for Oracle and SQL Server and resizing the space of databases/datafiles/tablespaces/transaction logs.
Managing Temp and Undo tablespaces.
Creating primary database storage structures (tablespaces) after application developers have designed an application. |
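A hedged PL/SQL sketch of the DBMS_SCHEDULER usage mentioned in this record; the job name, schema and schedule are placeholders, not details from the resume:

    -- Nightly statistics-gathering job scheduled through DBMS_SCHEDULER.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'NIGHTLY_STATS_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''APP_USER''); END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2; BYMINUTE=0',
        enabled         => TRUE);
    END;
    /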
Database | TECHNICAL SKILLS • SQL • Oracle v10, v11, v12 • R programming, Python, linear regression, machine learning and statistical modelling techniques (obtained certification through Edvancer Eduventures training institute) KEY SKILLS • Multitasking, working to meet client SLA in high pressure scenarios, handling sensitive clients along with improved skills at being a team player. • Excellent communication skills and quick learner. • Leadership qualities, team networking and courage to take up the problems proactively. Education Details
June 2012 Sadvidya Pre-University College
Application Database Administrator-DBMS (Oracle)
Application Database Administrator-DBMS (Oracle) - IBM India Pvt Ltd
Skill Details
CLIENTS- Experience - 30 months
MACHINE LEARNING- Experience - 30 months
ORACLE- Experience - 30 months
SQL- Experience - 30 months
EXCELLENT COMMUNICATION SKILLS- Experience - 6 months
Company Details
company - IBM India Pvt Ltd
description - Client: Blue Cross Blue Shield MA: Massachusetts Health Insurance
⢠Used Oracle SQL to store and organize data. This includes capacity planning, installation, configuration, database
design, migration, security, troubleshooting, backup and data recovery.
⢠Worked with client databases installed on Oracle v10, v11, v12 on a Linux platform.
⢠Proficient communication with clients across locations facilitating data elicitation.
⢠Handling numerous business requests and solving them diligently within the given time frame and responding quickly and effectively to production issues within SLA.
⢠Leading a team in co ordination with business to conduct weekly checkouts of the database servers and systems
IBM Certifications
Statistics 101, Applied Data Science with R, Big Data Foundations, Data Science Foundations
Business Analytics Certification (Pune)
Worked on Retail and Banking projects, to design a predictive business model using machine learning techniques in
R programming for an efficient business and marketing strategy. |
Database | TECHNICAL EXPERTISE • DB Languages: SQL • Database Tools: SQL Server 2014/2017, PostgreSQL 9.5, 9.6, Oracle 11gR2 • Operating Systems: Red Hat Linux, Oracle Linux, Windows Server 2012/2016 OTHER TECHNICAL SKILLS ORACLE 11G R2 • Proficient in Oracle database software installation, creation of databases using GUI/silent DBCA, architecture, file management, space management, user management, creating roles and assigning privileges/roles in 11gR2, and troubleshooting them. • Hands-on experience with control file/redo log/archive/undo management • Configuring listener.ora/tnsnames.ora files using Netmgr/Netca • Generating AWR, ADDM and ASH reports to diagnose problems • Database backup, cloning/duplication using hot & cold backups with RMAN. • Knowledge of Flashback technologies & expdp/impdp • Implemented Oracle 11gR2 RAC on the Oracle Linux platform and knowledge of services for troubleshooting RAC (CRSCTL, SRVCTL) • Knowledge of installation and configuration of RAC; add/remove nodes on RAC • Configuration of physical standby database (Data Guard) • Successfully upgraded from 11.2.0.1 to 11.2.0.4 & PSU patching using OPatch. STRENGTHS • Good communication skills. • Self-confident and can adapt to all work environments. • Enjoy responsibilities as lead and team player. • Patient listener & quick learner. • Capable of explaining issues & solving them. Education Details
B.E Computer Engineering Mumbai, Maharashtra Mumbai University
Higher Secondary Certificate Dr. DY Patil Jr College
Database Administrator
Database Administrator - DBA in Marketplace Technologies Ltd
Skill Details
DATABASE- Experience - 61 months
BACKUPS- Experience - 48 months
LINUX- Experience - 48 months
MS SQL SERVER- Experience - 48 months
SQL- Experience - 48 months
Company Details
company - DBA in Marketplace Technologies Ltd
description - Project Title: EBoss, Datafeed, MFDB, RTRMS, IndiaINX
company - Standard & Enterprise
description - Redhat Linux 7.4, Postgresql 9.5, 9.6
Duration: Feb 2017 - till date
Description: Bombay Stock Exchange (BSE) is Asia's first and the fastest stock exchange in the world, with a speed of 6 microseconds, and one of India's leading exchange groups, providing an efficient and transparent market for trading in equity, currencies, debt instruments, derivatives and mutual funds. BSE SME is India's largest SME platform, which has listed over 250 companies and continues to grow at a steady pace.
JOB ROLES & RESPONSIBILITIES
POSTGRESQL - • Worked on Redhat Linux OS clusters with PostgreSQL for High Availability (HA) using Pacemaker.
• Coordinated with Developer/Linux teams for database knowledge and support.
• Participated in implementation of new releases into production.
• Installed/configured PostgreSQL from source or packages on Redhat Linux servers.
• Performed PostgreSQL server management tasks, i.e. backup & restore, configuration, roles, blockings, tablespace creation and troubleshooting (see the sketch at the end of this record).
• Worked with the Storage team for Disaster Recovery (DR) setup built on SAN using EMC technology • Configured LDAP authentication & GSSAPI authentication from Windows to Linux for PostgreSQL.
• Configured logical replication for database servers, hot standby PostgreSQL servers, faster database backup methods, and schema and tablespace backups.
• Configured maximum connections to databases on Linux servers.
• Installed tds_fdw from source for linked servers to connect to heterogeneous databases & other required extensions, backup configuration, PITR using base backups.
MSSQL - • Day-to-day administration of live SQL Servers.
• Participated in live Primary Recovery (PR) & Disaster Recovery (DR) activities.
• Participated in PR & DR mocks for new releases into production.
• Configured linked servers, transactional replication, and maintenance tasks like database backup & restore, recovery, scheduled jobs and maintenance plans.
• Installed & configured SQL Server 2014, 2017 standalone and SQL Cluster servers.
• Maintained the security of the database by providing appropriate SQL roles, logins and permissions to users on demand.
• Worked with teams on application rollouts, application issues and SQL Server migrations.
• Experienced in handling production systems and understanding clients' requirements.
• Performed SQL Server service pack upgrades and hotfixes.
• Handled multiple SQL instances in a Windows SQL Cluster environment built on EMC SAN.
• Worked on MSSQL DB clusters with active/active & active/passive servers, Always-On Availability Groups (AAG) and HA/DR setups.
• Have experience with SAN and RAID levels and building and supporting SQL Cluster servers in SAN environments.
company - BSE Bombay Stock Exchange
description - Environment: Windows server 2008 R2, 2012 R2, 2016 Enterprise & Standard, |
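An illustrative PostgreSQL sketch of the tablespace and role management tasks described in this record (the directory, role, password and database names are placeholders; the directory must already exist and be owned by the postgres OS user):

    -- Create a tablespace, a login role and a database that uses them.
    CREATE TABLESPACE fast_ts LOCATION '/pgdata/fast_ts';
    CREATE ROLE app_rw LOGIN PASSWORD 'ChangeMe1';
    CREATE DATABASE appdb OWNER app_rw TABLESPACE fast_ts;
    GRANT CONNECT ON DATABASE appdb TO app_rw;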
Database | Technical Expertise Operating Systems Microsoft Window Server 2003/2008/2008 R2/2012 Database Technologies SQL Server, Sybase ASE Server, Oracle, MongoDB Monitoring and Ticketing Tools HP Service Manager 7.0/9.0, Solar winds DPA, JIRA and MongoDB OPS manager Web Server IIS 7.0 Database Tools SSMS, DBArtisan, Studio 3T, SnapShot Manager for SQL ServerEducation Details
B. Tech Computer Science Gulbarga, Karnataka PDACOE, Gulbarga, Autonomous Institution
Database Administrator II
Database Administrator III - BNY Mellon International Operations (India) PVT. LTD
Skill Details
Sql Dba- Exprience - Less than 1 year monthsCompany Details
company - BNY Mellon International Operations (India) PVT. LTD
description - SQL Server :
ï Installation, configuration of database servers using slipstream and setup all the maintenance jobs as per the standard policy on standalone as well as cluster environments with latest service packs
ï Installation of SSRS, uploading of .rdls and assigning correct data sources to reports. Grant necessary access to users & developers on reporting website. Aware of SSIS and designing packages as well.
ï Create and manage logins, users for database applications, assigning permissions as per requests, resolving user login issues.
ï Migration of all SQL server 2005/2008 servers to higher versions.
ï Setup of database refresh jobs on QA, DEV and UAT environments and fixing orphaned users.
ï Troubleshoot performance related issues.
ï Part of multiple projects to work with developers and provide all required support for testing in QA, UAT & DEV environment.
ï Lead the DR tests for database team.
ï Participate in database purge and archive activities.
ï Writing codes for automating database administration tasks.
ï Worked on automating DR tasks to start the agent jobs on multiple servers, restore databases for log shipped databases without manual intervention for online databases post DR activities.
ï Provide support to vendor databases, follow up with the vendor calls and timely escalate to next level when there is no update in predefined timeline.
ï Installation and configuration of smsql on windows server. Schedule jobs for creation and deletion of clones on sql server. Maintain backups using smsql.
MongoDB Server:
ï Installation and configuration of MongoDB server.
ï Creation of databases and collection.
ï Creation new user and grant access using Ops manager.
ï Monitor database servers using Ops manager.
Oracle & Sybase Server
ï Managing and maintaining multiple instances of Databases on Linux and windows servers.
ï Monitoring daily jobs includes backups, refresh and maintenance jobs.
company - Hewlett-Packard India Sales PVT. LTD. On the payroll of Softenger India PVT. LTD
description - ï Installation of SQL Server on standalone as well as windows cluster environments with latest service packs
ï SQL server installation using slipstream.
ï Installation of reporting services
ï Creating logins and users, assigning permissions as per requests.
ï Security audit for all logins includes maintenance of unused and orphan user logins
ï Create & Maintain daily and weekly jobs/maintenance plans includes backup, index rebuild/reorganize , update statistics and database consistency check
ï Create linked servers and ensure connectivity between servers
ï Monitor disk space proactively & Space management using data and log file shrinking
ï Monitor blocking, deadlocks, open transactions and slow running queries during performance issues and highlight costly queries to developers.
ï Configure alerts for deadlock and blocking to maintain performance
ï Implementing high availability technologies like log shipping, AlwaysON, mirroring and its troubleshooting, also have knowledge on replication
ï Successfully completed migration of Databases from one server to another
ï Performing DR drills (Online/Offline) on quarterly basis
ï Power shell scripting to monitor, restart SQL service and get Email alert for the service status.
ï Maintain TNS entries for oracle client as per client requests.
ï Interacting with customers for requirements
ï Contacting customer to update the status of handling issues and service requests at every stage of resolution
ï Managing proper escalation and notification matrix for all support levels |
Database | TECHNICAL SKILLS Operating Systems MS Windows Server 2012/2008/XP Software and Tools MS LiteSpeed, Idera SQL Safe, SSMS, Upgrade Advisor, SQL Server Profiler, SCOM, Diagnostic Manager, Remedy, Jira, Infopacc, Tivoli TDP backup tool, SQL Pack DatabasesMS SQL Server 2016/2014/2012/ 2008 R2/ 2008, Oracle 10g, Netezza Microsoft azure Education Details
Masters of Science Computer Science Pune, Maharashtra Indira College, Pune University
Lead database administrator
Microsoft Certified Professional with 11 years of experience in database administration on MS SQL Server 2016/2014/2012/2008 R2/ 2008
Skill Details
MS SQL SERVER- Exprience - 110 months
Microsoft azure- Exprience - Less than 1 year months
Always on availabiity group- Exprience - Less than 1 year months
Database mirroring- Exprience - Less than 1 year months
Performance tuning- Exprience - Less than 1 year months
Log shipping- Exprience - Less than 1 year months
Installation , upgrade, migration and patching- Exprience - Less than 1 year monthsCompany Details
company - Ensono
description - Employment transfer as a part of project acquisition to Ensono from Wipro.
SQL Server Database Administration
company - Wipro Technologies
description - Microsoft Certified Professional with 11 years of experience in database administration on MS SQL Server 2016/2014/2012/2008 R2/ 2008.
Experience with MS SQL Server 2016/2014/2012/2008 R2/ 2008 installation, upgrade, and administration
Microsoft Azure certified.
Have understanding of Azure VM, Azure Storage, Azure network, Azure AD and Azure SQL database.Â
Incident management, change management and Problem management for SQL Server Database team.
Participating in meetings, conference calls with client, Service Delivery Manager and Application team for System improvements.
Participated in quarterly DR activity.
Involved in creation of SIP - Service Improvement Plans
Involved in handling of high severity issues and provided RCA for the same.
Worked on Always on availability groups, database mirroring, replication, clustering and log shipping.
Have basic understanding of Oracle and Netezza.
Provided on- call support during out of office hours and weekends.
Resource & shift management of 5 SQL DBAs from offshore in multi-client environment for Data center services.
Provided KT to team members, monitor and guide trainees.
company - Wipro Technologies
description - Responsibilities: ⢠MS SQL Server 2016/2014/2012/ 2008 R2/ 2008 installation, configuration, and administration.
⢠Worked on Always on availability groups, log shipping, database mirroring and clustering.
⢠Participated in PCI scan report to perform installation of security hot fixes, service packs for SQL servers to remove vulnerability.
⢠Participated in Holmes BOTS automation implementation of SQL Pack tool.
⢠Worked on service requests, incidents and critical issues.
⢠Involved in conference calls to provide DBA support for critical issues.
⢠Performance tuning.
Environment: SQL Server 2016/2014/2012/2008R2/2008, Windows Server 2012/2008R2/2008
company - Mphasis
description -
company - Mphasis
description - Responsibilities: ⢠MS SQL Server 2012/ 2008 R2/ 2008 installation, configuration, and administration.
⢠Worked on Always on availability groups, log shipping, database mirroring and clustering.
⢠Performed SQL server patching activity ⢠Worked on daily reports like cluster failover, backup, AG/LS/Mirror report and server disk space report.
⢠Worked on service requests, incidents and critical issues.
⢠Participated in quarterly DR activity.
⢠Involved in conference calls to provide DBA support for critical issues.
⢠Provided support to windows team during patching for AG-mirror-cluster failover/failback and database health check.
⢠Performed all the health checks for market open servers and provided update in market open call ⢠Deeply involved in resolution of the issue and finding the root cause analysis of the issue ⢠Performance tuning.
Environment: SQL Server 2012/2008R2/2008, Windows Server 2008R2/2008
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: ⢠SQL server, Oracle and Netezza databases support tasks.
⢠MS SQL Server 2008 R2/ 2008 installation, upgrade, and administration.
⢠Done capacity planning for database growth for all SQL servers.
⢠Troubleshooting alerts.
⢠Worked on log shipping and mirroring.
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008, Oracle 10g/RAC
company - Synechron Technologies Pvt. Ltd
description -
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: ⢠Pursued in-depth training on Oracle 11g Architecture and SQL Server.
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008, Oracle 10g
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: ⢠Carried out version changes for schemas from PE8 version to EE11 version as per the process given
Environment: Oracle 11g
company - Mastek Ltd
description - Responsibilities: ⢠SQL Server 2008 R2/ 2008 installation, upgrade, and administration ⢠database backup/restore.
⢠Performed MS SQL Server audits ⢠Worked with database mirroring, replication, log shipping and clustering.
⢠Supported UAT and PROD environments ⢠Performed deployment document review.
⢠Carried out deployments for different applications
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008
company - Mastek Ltd
description -
company - PP Software and Systems Ltd
description -
company - PP Software and Systems Ltd
description - Description: The system provides Master Data Management and Procurement modules for dairy industry.
Responsibilities: ⢠Designed, coded, and tested ⢠Customized ERP system as per the requirement
Environment: Core Java, PostgreSQL |
Database | SKILLSET Oracle DBA, MySQL, MARIADB, PostgreSQL Database Administration ITSKILLS SQL Oracle 10g, 11g, MYSQL, MariaDB, postgreSQL Windows, Linux Putty Education Details
January 2018 MCS Pune, Maharashtra Pune University
Database administrator
Database administrator - Infiniteworx Omnichannel Pvt. Ltd
Skill Details
DATABASE- Exprience - 17 months
MYSQL- Exprience - 17 months
ORACLE- Exprience - 17 months
SQL- Exprience - 17 months
DATABASE ADMINISTRATION- Exprience - 6 monthsCompany Details
company - Infiniteworx Omnichannel Pvt. Ltd
description - Pune Sept 2017 to Present
RESPONSIBILITIES:
⢠Creating tablespaces and planning the location of data, monitoring the tablespaces growth periodically.
⢠All replication setup
⢠Moved database Schema changes to stage.
⢠Dba support query resolution.
⢠Creating user and giving specific privileges
⢠Database management.
⢠Database recovery, moving data files to different locations.
⢠Planning the backup policies and Backup/ Recovery of databases based on the criticality.
⢠IMPORT/EXPORT.
⢠Degine schemas
Key Result Areas:
⢠Providing 24 /7 support to resolve database performance issues, Job failures, Sessions & diagnose root causes
⢠Installation, configuring and updating Oracle server software and related Oracle products. Installation, configuraing and updating Mysql, Sql server, MariaDB, MongoDB
⢠Supported multiple databases and administered Oracle Databases of Large DB Sizes for production, development & test setups.
⢠Maintaining table spaces & data files, Control files, Online Redo log files
⢠Creating Users, granting Roles & Privileges to users and managing tablespaces for different users by granting quota on Default & Temporary tablespaces.
⢠Taking Oracle RMAN Backups (Scheduling for day wise backup)
⢠Implementing the incremental, cumulative and full RMAN backup for each database to have space management and effective recovery.
⢠Logical Backup Using Export & Import/datapump Export of important tables at regular intervals.
⢠Regular checking of trace, alert log file, all ORA errors
⢠Working on incidents like User creation/deletion incidents, backup failed incidents.
⢠Checking Listener Status, connectivity Troubleshooting and fixing database listener issues.
⢠Look for any new alert / error log entries / errors, error details in Trace files generated. Executing DDL & DML scripts as per customer requirements
⢠Mentoring, coaching and appraising team members with active involvement in the recruitment process
⢠Contributing in Project Documentation and generating daily reports
⢠Ensuring compliance to quality norms and taking steps for any non-conformance Spearheading complete project activities ensuring timely completion of project
⢠Implementing security policies on different database systems with granting and revoking privileges to the users
⢠Following change management processes and participated in related meetings
⢠Verifying all Instances/DB are running, Tablespaces are online, Monitor Backround processes and status.
company - InnovativeTechnologies
description - Clients: BANKING DOMAIN |
Database | Education Details
January 2016 BSc. Mumbai, Maharashtra Mumbai University
January 2013 H.S.C. Maharashtra Board
January 2011 S.S.C. Maharashtra Board
MySQL Database Administrator
2+ Years of experience in MySQL Database Administrator ( MySQL DBA)
Skill Details
MySQL DBA , Centos , Backup , Restore , Replication , Query Optimazation- Exprience - 24 monthsCompany Details
company - Trimax IT Infrastructure & Services Ltd
description - ·       MYSQL Installation, maintenance and Upgrades (Version 5.5 , 5.6)
·       MySQL database administration on a large scale MySQL installation
·       Experience with MySQL on both Linux and Windows
·       MySQL processes, security management and queries optimization.
·       Performed query analysis for slow and problematic queries.
·       Performed Structural changes to Database like creating tables, adding columns according to business requirement
·       Creating and Maintaining Database Maintenance Plans.
·       Writing scripts to Create Jobs for Backup & Restore Plans.
·       Working on MYISAM to INNODB engine.
·       Working on Server shifting , tuning parameter , database purging
·       Working on Mysql master slave Replication
·       Handling Release management and user acceptance.
·       Restore using xtrabackup.
·       Responsibilities include monitoring daily, weekly and monthly system maintenance tasks such as database backup, replication verification, database integrity verification and indexing updates
·       Work in 24/7 production database support.
company - Trimax IT Infrastructure & Services Ltd
description - ·       MYSQL Installation, maintenance and Upgrades (Version 5.5 , 5.6)
·       MySQL database administration on a large scale MySQL installation
·       Experience with MySQL on both Linux and Windows
·       MySQL processes, security management and queries optimization.
·       Performed query analysis for slow and problematic queries.
·       Performed Structural changes to Database like creating tables, adding columns according to business requirement
·       Creating and Maintaining Database Maintenance Plans.
·       Writing scripts to Create Jobs for Backup & Restore Plans.
·       Working on MYISAM to INNODB engine.
·       Working on Server shifting , tuning parameter , database purging
·       Working on Mysql master slave Replication
·       Handling Release management and user acceptance.
·       Restore using xtrabackup.
·       Responsibilities include monitoring daily, weekly and monthly system maintenance tasks such as database backup, replication verification, database integrity verification and indexing updates
·       Work in 24/7 production database support. |
Database | TECHNICAL SKILL: Operating System LINUX, Windows Server 2012 R2, Windows 98, Windows 2000/ XP Tools & Utility Packages SQL* Loader, SQL*PLUS, OEM, Datapump, expdp/impdp, PLSQL Developer, Jenkins Database Oracle 10g, Oracle 11g, Oracle 12c Scripting UNIX Shell Scripting Language SQL Education Details
January 2011 M.B.A. Amravati, Maharashtra Amravati University
January 2007 B.C.A. Nagpur, Maharashtra Nagpur University
Oracle Database Administrator
ORACLE DATABASE ADMINISTRATOR ON LINUX/MICROSOFT WITH 4 YEARS EXPERIENCE.
Skill Details
ORACLE- Exprience - 48 months
LINUX- Exprience - 6 months
ORACLE DBA- Exprience - Less than 1 year months
RAC- Exprience - Less than 1 year months
GOLDEN GATE- Exprience - Less than 1 year months
ASM- Exprience - Less than 1 year months
DATAGUARD- Exprience - Less than 1 year monthsCompany Details
company - TIETO INDIA PVT. LTD
description - Pune From February 2015 till present
Project Profile:
Oil and Gas unit of Tieto India Pvt. Ltd. is working for Environmental Components (EC) application. Tieto is the authorized service provider in EC. Energy Components is a complete end-to-end hydrocarbon accounting solution following the hydrocarbons from production to transport, sales and revenue recognition. Globally market-leading hydrocarbon accounting software with functionality coverage exceeding other available solutions. Modern, flexible and scalable technology platform. Selected as the global standard and best practice by oil & gas super majors.
Responsibilities: ⢠Oracle Database Administration 11g R2, 12c and 18c ⢠Supporting databases in 24x7 environments and coordinate with Application, OS, Storage and Development Teams. Test and Production environments ⢠Regularly monitoring the trace files and Alert log files for database related issues.
⢠Experience in monitoring the CPU usage, IO and memory utilization at OS level.
⢠Checking the Alert log file to analyze the ORA errors if any to raise SR with Oracle.
⢠Monitoring the log files, backups, database space usage and the use of system resources.
⢠Configuring Backup (RMAN) for database and restoring database.
⢠Installation, configuring and updating Oracle server software and related Oracle products of 11g and 12C.
⢠Oracle Server installation, client installation and configuration, PLSQL developer installation.
⢠Creating database using DBCA and manually.
⢠Creating of Oracle user and granting proper privileges to user as per request.
⢠Creating AWR, ASH and ADDM reports for database performance analysis.
⢠Handling space management and performance issues in Oracle databases.
⢠Creating remote database link.
⢠Renaming and resizing of data files in Oracle database if needed.
⢠Tablespace shrinking with regular time interval to reclaim server space.
⢠Expertise in Export and Import using data pump in Oracle database.
⢠Expertise in Configuration of Listener and Tnsnames through NETMGR and NETCA and statically also.
⢠Managing Oracle Listener and Oracle Network Files.
⢠Creating user Profiles, granting specific privileges and roles to the users in Oracle database.
⢠Maintaining tablespaces & data files, Control files, Online Redo log files in Oracle database.
⢠Worked on AWS cloud services like EC2, S3, RDS, ELB, EBS, VPC, Route53, Auto, Cloud watch, Cloud Front, IAM for installing configuring and troubleshooting on various Amazon images for server migration from physical into cloud. |
Database | Technical Skills Databases: Oracle RDBMS- 10g, 11g & 12c Technology/utilities: Data Pump, RMAN, Data guard, ASM, RAC, Golden Gate Tools: OCC, PUTTY, SQLPLUS, SQL Developer, Netbackup, SCOM, SCCM, VMWare Vsphere Operating Systems: RHEL 6.0, RHEL 6.5, UNIX and Microsoft WindowsEducation Details
Database Administrator
Database Administrator - BNY Mellon
Skill Details
DATABASES- Exprience - 24 months
ORACLE- Exprience - 24 months
RMAN- Exprience - 24 months
NETBACKUP- Exprience - 24 months
SCOM- Exprience - 24 monthsCompany Details
company - BNY Mellon
description - Databases: 600+
Team Size: 8
Duration: Jan 2017 - Till Date
Clients: Over 130+ investment banking organizations who are hosted with Eagle
Responsibilities: Database Management (supporting and managing critical production, pre-production, test and reporting databases on different platforms), Capacity Management, Upgrades.
• Handling day-to-day database activities, monitoring and incident management.
• Building new databases as per requirements and preparing them for go-live with the help of multiple teams.
• Working on scheduled database patching activity (CPU, PSU) • Installing the latest patches on production, Dev and Test databases as per suggestions from Oracle Support.
• Database upgrades from 11g to 12c.
• Adding disks to ASM disk groups.
• Building DR databases using Active Data Guard, keeping them in sync with prod and resolving issues if any persist • Data Guard management - checking lag status, clearing archive lag, checking processes like RFS/MRP, archive management (a lag-check sketch follows this list) • Working on tablespace-related issues • Managing user access and profiles • Importing and exporting using Data Pump • Maintaining an inventory of all databases in a single centralized database • Refreshing test environments from the production database.
• Working with Oracle Support to resolve Oracle errors.
• Scheduling daily and weekly database backups using RMAN, troubleshooting RMAN issues.
• Database cloning using RMAN.
• Taking part in cutovers to upgrade the application to higher versions.
• Strictly following the ITIL process for incident management and change management.
• Providing a weekly report of issues in team meetings, also participating in and suggesting service improvement plans.
• Database migrations from one server to another or to different platforms • RCA and impact analysis reporting during any production outage.
Previous Organization: Brose India
Project I: Central IT Management
company -
description - Responsibilities: Managing our internal databases and servers of Brose global.
• Providing 24x7 on-call support in rotational shifts.
• Performing day-to-day database activities • Monitoring and responding to DBA group mails for all alerts, issues and ad-hoc business user requests, etc.
• Database creation and patching • Backup of databases in frequent cycles using Data Pump/RMAN (a Data Pump export sketch follows this list).
• Database refreshes using RMAN and Data Pump.
• Recovery using copies of data / RMAN • Monitoring logs and traces to resolve issues.
• Creating new VM servers and preparing them for go-live, also decommissioning servers as per requirements.
• Weekly patching of Windows servers using SCCM and taking actions for patching if needed • Monitoring and troubleshooting daily and weekly OS backups using Symantec NetBackup • Managing user accounts for OS users and database users • Monitoring OS-level alerts using SCOM
Project II: Data Center Migration (Onsite Project)
Responsibilities: The data center migration activity involved moving our data center from one location to another; all of our servers and databases were migrated successfully.
• Installation of Oracle 11g on Linux platforms • Worked on service requests (Incidents / Changes / Requests) • Creation of users, managing user privileges • Configured RMAN backups for databases • Patching of databases • Configuring physical standby databases using Data Guard • Cloning of servers and migration to another cluster
ACADEMIA / PERSONAL DETAILS ⢠Bachelor of Engineering (B.E.) in Computer Science and Engineering From SGBAU Amravati University, Amravati in 2014 with CGPA of 7.21
Current Address:- Mr. Yogesh Tikhat, C/O: Raj Ahmad, Flat# G2-702, Dreams Aakruti, Kalepadal, Hadapsar, Pune - 411028
Highest Qualification BE (cse)
PAN: - AOFPT5052C |
Database | Software Skills: * RDBMS: MS SQL SERVER 2000/2005/2008, 2012, 2014 * Operating Systems: WINDOWS XP/7, WINDOWS SERVER 2008, 12 * Fundamentals: MS Office 03/07 * Tools: SSMS, Performance Monitor, SQL Profiler, SQL LiteSpeed. Company name: Barclays Technology Centre India. Team Size: 24 Role: Database Administrator Support Description: Barclays is a UK-based retail & investment bank, over 300 years old. It has operations in over 40 countries and employs approximately 120,000 people. Barclays is organised into four core businesses: Personal & Corporate (Personal Banking, Corporate Banking, Wealth & Investment Management), Barclaycard, and Investment Banking. Responsibilities: • Attending various calls from all over the world on various database issues. • Working on Web GUI alerts and resolving incident tickets within the timelines. • Troubleshooting log shipping issues and fixing the related alerts. • Identifying and resolving blocking and locking related issues. • Configuration and monitoring of replication, log shipping and mirroring setups. • Working on replication issues and Always On issues. • Granting and revoking permissions for various account provisioning tasks. • On-call support during weekends, performing DR tests, and working on weekly maintenance jobs and weekend change requests. Education Details
B.Sc. Maths Kakatiya University Board secured
SQL server database administrator
Database administrator
Skill Details
DATABASE- Experience - 120 months
DATABASE ADMINISTRATOR- Experience - 72 months
MAINTENANCE- Experience - 48 months
MS SQL SERVER- Experience - 48 months
REPLICATION- Experience - 48 months
Company Details
company - Barclays global services centre
description - SQL Server database implementation and maintenance
Log shipping, replication, high availability, clustering, performance tuning, database mirroring, installation, configuration, upgrades, migration
company - Wipro Infotech Pvt Ltd
description - SQL server database administrator
company - CITI Bank
description - Worked as Database Support at Accord Fintech, Sanpada from Sep 2008 to Feb 2013.
company -
description - 2012.
• Sound knowledge of database backup, restore, attach, detach and disaster recovery procedures.
• Developed backup and recovery strategies for the production environment.
• Ensuring data consistency in databases through DBCC and DMV commands (a sqlcmd sketch follows this record).
• Experience in query tuning, stored procedures and troubleshooting performance issues.
• Hands-on experience in DR processes including log shipping and database mirroring.
• Experience in scheduling and monitoring of jobs.
• Experience in configuring and troubleshooting Always On.
• Creating and maintaining maintenance plans.
• Expertise in planning and implementing MS SQL Server security and database permissions.
• Clear understanding of the implementation of log shipping, replication and mirroring of databases between servers.
• Performance tuning (Performance Monitor, SQL Profiler, Query Analyzer) • Security for servers & databases: implementing security by creating roles/users, adding users to roles and assigning rights to roles.
• Creating and maintaining linked servers between SQL instances.
• Creating and maintaining Database Mail.
• Monitoring and troubleshooting database issues.
• Creating DTS packages for executing the required tasks.
• Expertise in creating and maintaining indexes, including rebuilds and reorganizes.
• Daily maintenance of SQL Servers, including reviewing SQL error logs and Event Viewer. |
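The DBCC consistency checks referenced in this record could be scripted along these lines; the server and database names are hypothetical and -E assumes Windows authentication.

    #!/usr/bin/env bash
    # Run an integrity check on one database via sqlcmd (names and auth mode are assumptions).
    sqlcmd -S SQLPROD01 -E -Q "DBCC CHECKDB ([SalesDB]) WITH NO_INFOMSGS, ALL_ERRORMSGS;"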
Database | Areas of Expertise • Oracle Databases 12c, 11g, 10g • WebLogic 12c, 11g • Grid Infrastructure • RMAN • ASM • Middleware: OIM, OAM, SOA • Shell Scripts • DataGuard • Web servers - OHS, Apache • Architecture Designs • Proof of Concepts • DevOps
Education Details
January 2007 Bachelor of Engineering Information Technology Sangli, Maharashtra Walchand College
January 2004 Diploma Computer Engineering Jalgaon, Maharashtra Govt. Polytechnic
Lead Database Administrator
Lead Database Administrator - Tieto Software
Skill Details
DATABASES- Experience - 108 months
MIDDLEWARE- Experience - 96 months
RMAN- Experience - 84 months
SHELL SCRIPTS- Experience - 48 months
Company Details
company - Tieto Software
description - As part of the AO (Application Operations) team, the scope of the project is considerably wider than typical database administration. The range of accomplishments extends right from the data tier to the middle tier & application tier:
- Maximized availability of applications from 99.3% to 99.8%
- Raised business by presenting Proof of Concepts for 10+ cases
- Delivered upgrades of various applications from time to time to keep them on supported platforms
- Saved SLAs as per contract by means of handling P1, P2 issues effectively
- Produced Capacity reports comprising all layers (Data, Middleware, Web) of various applications
- Generated Work Orders as per customer need
company - Tieto Software
description - - Designed databases of various applications
- Planned RMAN backup and recovery, BCP strategies
- Executed Business Continuity Testing for various applications
- Introduced Zero Cost high availability solutions - Active-Passive Failover
- Optimized performance by means of scripting automation
- Established cloning procedures for all layers of various applications
- Delivered Infrastructure changes, like FW Openings & LoadBalancer configuration for new applications
- Eliminated downtime by troubleshooting issues for Middleware products - OIM, OAM, SOA
- Contributed to build & maintain Integration Layer- SMTP, ftp, Reverse Proxy, OCM
company - Tieto Software
description - - Provided database support to environments - PROD, UAT, TEST, DEV
- Performed Database Refresh/Cloning from production to development and support databases
- Reduced risk level by means of upgrading & patching databases time to time
- Protected databases by assigning appropriate roles and privileges as per SOD
- Generated & maintained Middleware schemas using RCU
- Exceeded scope of work by supporting & maintaining WebLogic platform - installation, patching, troubleshooting issues
- Expanded duty scope to web servers: Install & maintain- OHS, apache, tomcat
company - HSBC Software
description - Being part of project supporting HSBC Bank France, I achieved:
- Handled incidents & service requests as day-to-day database administration tasks
- Delivered basic implementation services - Database installation, patching, upgrades
- Performed capacity planning - managing tablespaces, compressions
- Contributed to maintaining quality of databases - managing instances, indexes, re-organization, performance monitoring & tuning using AWR, ADDM reports
- Maintained backups & recovery of database - logical backups (exp/imp), datapump (expdp/impdp), cold backups, hot backups, RMAN backup/restore, RMAN Duplication
- Reduced effort through automation - value-add initiatives including shell scripts for automated housekeeping operations, scheduling backups, and using crontab/at to schedule tasks (example crontab entries follow this list)
- Implemented high availability solutions - Dataguard |
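The shell-script housekeeping and crontab scheduling mentioned above might look like the entries below; all paths and times are assumptions for illustration.

    # Example crontab entries for automated DBA housekeeping (paths and schedules are assumed).
    # m  h  dom mon dow  command
    30 01  *   *   *    /home/oracle/scripts/rman_level0.sh   >> /home/oracle/logs/rman_level0.log 2>&1
    00 06  *   *   *    /home/oracle/scripts/purge_audit.sh   >> /home/oracle/logs/purge_audit.log 2>&1
    15 *   *   *   *    /home/oracle/scripts/check_alert_log.sh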
Database | Education Details
May 2011 to May 2014 Bachelor of science Information technology Mumbai, Maharashtra Mumbai university
Oracle DBA
Oracle database administrator
Skill Details
Installation of Oracle on RH Linux & Windows. Creating/managing user profiles and analyzing their privileges and tablespace quotas. Backup of databases using logical and physical procedures. Recovery of databases in case of database crashes, disk/media failures, etc. Standard DBA functions like space management, rollback segments, extents. Database management and monitoring of the database. Willing to learn new things. Being a constructive team member, contributing practically to the success of the team.- Experience - 48 months
Company Details
company - Accelya kale solutions ltd
description - Database Administrator working in a 24*7 support environment maintaining databases running on Oracle 11g and 12c.
Database upgrades from Oracle 11g to Oracle 12c.
Installation of Database critical patches.
Taking cold and hot backups on scheduled times and monitoring backups.
Importing the export dump to another database as per demands.
Automating most of the daily activities through cronjobs, shell scripts or schedulers.
Making Plan of Actions for Various Activities.
Raising SR with Oracle Support for different severity issues.
Handling user requests and proper client interaction.
Monitoring & managing database growth and tablespaces; adding, resizing and renaming datafiles.
Restoration of database using RMAN backups for backup consistency checks.
Migration of Database using export / import and RMAN backups.
Configuring & managing Physical Standby database.
Creating database links, Tablespaces, database directories.
Managing network settings through listener.ora and tnsnames.ora files.
Restoration of data using old logical backup as per client request.
Schema replication across databases through data pump tool.
Taking cold and hot backups on scheduled times and monitoring backups
Taking EXPDP of database, database objects and a particular schema
Using the SCP ticketing tool to keep track of client requests.
Performing maintenance activities such as index rebuilding and stats gathering.
Troubleshooting basic-level performance issues.
Setting up a new environment from database perspective within the requested timelines
Adding/Deleting disks in ASM and monitoring the ASM diskgroups.
Creating users & privileges with appropriate roles and levels of security.
Database Administrator working in a 24*7 support environment maintaining databases running on Oracle 11g and 12c.
Performing online and offline database re-organization for database enhancement.
Migrating databases from non-ASM to ASM file systems.
Grid upgrade from 11g to 12c.
company - Insolutions Global Ltd
description - Oracle software installation (graphical/silent), database upgrades, patch upgrades.
Maintaining around 80+ UAT DB servers, 40 production DB and 28 standby/DR DB.
Managing/creating DR & standby servers, DB sync.
Backup and recovery (RMAN/ Datapump).
Performing activities like switchover and failover.
Allocating system storage and planning future storage requirements for the database system.
Enrolling users and maintaining system security.
Monitoring alert logs, snap ID generation, DB size, server space, OEM reports and user validity.
Controlling and monitoring user access to the database.
Scheduling shell scripts or dbms_jobs using crontab or DBMS_SCHEDULER (monitoring scripts, listener checks, backup scripts, AWR reports), etc. (a DBMS_SCHEDULER sketch follows this list)
Planning for backup and recovery of the database.
Managing the production database for Oracle and SQL Server and resizing the space of databases/datafiles/tablespaces/transaction logs.
Managing Temp and Undo tablespaces.
Creating primary database storage structures (tablespaces) after application developers have designed an application. |
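As a sketch of the DBMS_SCHEDULER jobs mentioned above, a nightly stats-gather job could be registered like this; the job name, schema and schedule are assumptions.

    #!/usr/bin/env bash
    # Register a nightly schema stats job via DBMS_SCHEDULER (names and schedule are assumed).
    sqlplus -s / as sysdba <<'EOF'
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'GATHER_APP_STATS',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''APP_SCHEMA''); END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0',
        enabled         => TRUE);
    END;
    /
    EOF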
Database | TECHNICAL SKILLS • SQL • Oracle v10, v11, v12 • R programming, Python, linear regression, machine learning and statistical modelling techniques (obtained certification through the Edvancer Eduventures training institute) KEY SKILLS • Multitasking, working to meet client SLAs in high-pressure scenarios, handling sensitive clients, along with improved skills at being a team player. • Excellent communication skills and quick learner. • Leadership qualities, team networking and the courage to take up problems proactively.
Education Details
June 2012 Sadvidya Pre-University College
Application Database Administrator-DBMS (Oracle)
Application Database Administrator-DBMS (Oracle) - IBM India Pvt Ltd
Skill Details
CLIENTS- Experience - 30 months
MACHINE LEARNING- Experience - 30 months
ORACLE- Experience - 30 months
SQL- Experience - 30 months
EXCELLENT COMMUNICATION SKILLS- Experience - 6 months
Company Details
company - IBM India Pvt Ltd
description - Client: Blue Cross Blue Shield MA: Massachusetts Health Insurance
• Used Oracle SQL to store and organize data. This includes capacity planning, installation, configuration, database design, migration, security, troubleshooting, backup and data recovery.
• Worked with client databases installed on Oracle v10, v11, v12 on a Linux platform.
• Proficient communication with clients across locations, facilitating data elicitation.
• Handling numerous business requests and solving them diligently within the given time frame, and responding quickly and effectively to production issues within SLA.
• Leading a team in coordination with the business to conduct weekly checkouts of the database servers and systems
IBM Certifications
Statistics 101, Applied Data Science with R, Big Data Foundations, Data Science Foundations
Business Analytics Certification (Pune)
Worked on Retail and Banking projects, to design a predictive business model using machine learning techniques in
R programming for an efficient business and marketing strategy. |
Database | TECHNICAL EXPERTISE • DB Languages: SQL • Database Tools: SQL Server 2014/2017, PostgreSQL 9.5, 9.6, Oracle 11gR2 • Operating Systems: Redhat Linux, Oracle Linux, Windows Server 2012/2016 OTHER TECHNICAL SKILLS ORACLE 11G R2 • Proficient in Oracle database software installation, creation of databases using GUI/silent DBCA, architecture, file management, space management, user management, creating roles and assigning privileges/roles in 11gR2 and troubleshooting them. • Hands-on experience with control file/redo log/archive/undo management • Configuring listener.ora/tnsnames.ora files using Netmgr/netca • Generating AWR, ADDM and ASH reports to diagnose problems • Database backup, cloning/duplication using hot & cold backups and RMAN. • Knowledge of Flashback technologies & expdp/impdp • Implemented Oracle 11gR2 RAC on the Oracle Linux platform, with knowledge of services for troubleshooting RAC (CRSCTL, SRVCTL) • Knowledge of installation and configuration of RAC; add/remove nodes on RAC • Configuration of physical standby databases (Data Guard) • Successfully upgraded from 11.2.0.1 to 11.2.0.4 & PSU patching using OPatch. STRENGTHS • Good communication skills. • Self-confident and can adapt to all work environments. • Enjoy responsibilities as a lead and team player. • Patient listener & quick learner. • Capable of explaining issues & solving them.
Education Details
B.E Computer Engineering Mumbai, Maharashtra Mumbai University
Higher Secondary Certificate Dr. DY Patil Jr College
Database Administrator
Database Administrator - DBA in Marketplace Technologies Ltd
Skill Details
DATABASE- Experience - 61 months
BACKUPS- Experience - 48 months
LINUX- Experience - 48 months
MS SQL SERVER- Experience - 48 months
SQL- Experience - 48 months
Company Details
company - DBA in Marketplace Technologies Ltd
description - Project Title: EBoss, Datafeed, MFDB, RTRMS, IndiaINX
company - Standard & Enterprise
description - Redhat Linux 7.4, Postgresql 9.5, 9.6
Duration: Feb 2017 - till date
Description: Bombay Stock Exchange (BSE) is Asia's first and the world's fastest stock exchange, with a speed of 6 microseconds, and one of India's leading exchange groups, providing an efficient and transparent market for trading in equity, currencies, debt instruments, derivatives and mutual funds. BSE SME is India's largest SME platform, which has listed over 250 companies and continues to grow at a steady pace.
JOB ROLES & RESPONSIBILITIES
POSTGRESQL - • Worked on a Redhat Linux OS cluster with PostgreSQL for high availability (HA) using Pacemaker.
• Coordinated with developer/Linux teams for database knowledge and support.
• Participated in the implementation of new releases into production.
• Installed/configured PostgreSQL from source or packages on Redhat Linux servers.
• Performed PostgreSQL server management tasks, i.e. backup & restore, configuration, roles, blockings, tablespace creation and troubleshooting.
• Worked with the storage team for the Disaster Recovery (DR) setup built on SAN using EMC technology • Configured LDAP authentication & GSSAPI authentication from Windows to Linux for PostgreSQL.
• Configured logical replication for database servers, hot standby PostgreSQL servers, faster database backup methods, and schema and tablespace backups (a pg_basebackup sketch follows this list).
• Configured maximum connections to databases on Linux servers.
• Installed tds_fdw from source for linked servers to connect to heterogeneous databases & other required extensions, backup configuration, and PITR using base backups.
MSSQL - • Day-to-day administration of live SQL Servers.
• Participated in live Primary Recovery (PR) & Disaster Recovery (DR) activities.
• Participated in PR & DR mocks for new releases into production.
• Configured linked servers, transactional replication, and maintenance tasks like database backup & restore, recovery, scheduled jobs and maintenance plans.
• Installed & configured SQL Server 2014 and 2017 standalone and SQL cluster servers.
• Maintained the security of databases by providing appropriate SQL roles, logins and permissions to users on demand.
• Worked with teams on application rollouts, application issues and SQL Server migrations.
• Exposure to handling production systems with skill and an understanding of the client's requirements.
• Performed SQL Server service pack upgrades and hotfixes.
• Handled multiple SQL instances in a Windows SQL cluster environment built on EMC SAN.
• Worked on MSSQL DB clusters with active/active & active/passive servers, Always-On Availability Groups (AAG) and HA/DR setups.
• Experience with SAN and RAID levels, and building and supporting SQL cluster servers in SAN environments.
company - BSE Bombay Stock Exchange
description - Environment: Windows server 2008 R2, 2012 R2, 2016 Enterprise & Standard, |
Database | Technical Expertise Operating Systems Microsoft Windows Server 2003/2008/2008 R2/2012 Database Technologies SQL Server, Sybase ASE Server, Oracle, MongoDB Monitoring and Ticketing Tools HP Service Manager 7.0/9.0, SolarWinds DPA, JIRA and MongoDB Ops Manager Web Server IIS 7.0 Database Tools SSMS, DBArtisan, Studio 3T, SnapShot Manager for SQL Server
Education Details
B. Tech Computer Science Gulbarga, Karnataka PDACOE, Gulbarga, Autonomous Institution
Database Administrator II
Database Administrator III - BNY Mellon International Operations (India) PVT. LTD
Skill Details
Sql Dba- Experience - Less than 1 year
Company Details
company - BNY Mellon International Operations (India) PVT. LTD
description - SQL Server :
ï Installation, configuration of database servers using slipstream and setup all the maintenance jobs as per the standard policy on standalone as well as cluster environments with latest service packs
ï Installation of SSRS, uploading of .rdls and assigning correct data sources to reports. Grant necessary access to users & developers on reporting website. Aware of SSIS and designing packages as well.
ï Create and manage logins, users for database applications, assigning permissions as per requests, resolving user login issues.
ï Migration of all SQL server 2005/2008 servers to higher versions.
ï Setup of database refresh jobs on QA, DEV and UAT environments and fixing orphaned users.
ï Troubleshoot performance related issues.
ï Part of multiple projects to work with developers and provide all required support for testing in QA, UAT & DEV environment.
ï Lead the DR tests for database team.
ï Participate in database purge and archive activities.
ï Writing codes for automating database administration tasks.
ï Worked on automating DR tasks to start the agent jobs on multiple servers, restore databases for log shipped databases without manual intervention for online databases post DR activities.
ï Provide support to vendor databases, follow up with the vendor calls and timely escalate to next level when there is no update in predefined timeline.
ï Installation and configuration of smsql on windows server. Schedule jobs for creation and deletion of clones on sql server. Maintain backups using smsql.
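A sample of the scripted DR step described above, bringing a log-shipped secondary online; the instance and database names are hypothetical.

    #!/usr/bin/env bash
    # Bring a log-shipped secondary database online during a DR test (names are assumed).
    sqlcmd -S SQLDR01 -E -Q "RESTORE DATABASE [TradeDB] WITH RECOVERY;"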
MongoDB Server:
• Installation and configuration of MongoDB server.
• Creation of databases and collections.
• Creating new users and granting access using Ops Manager.
• Monitoring database servers using Ops Manager.
Oracle & Sybase Server
• Managing and maintaining multiple database instances on Linux and Windows servers.
• Monitoring daily jobs, including backups, refreshes and maintenance jobs.
company - Hewlett-Packard India Sales PVT. LTD. On the payroll of Softenger India PVT. LTD
description - • Installation of SQL Server on standalone as well as Windows cluster environments, with the latest service packs
• SQL Server installation using slipstream.
• Installation of Reporting Services
• Creating logins and users, assigning permissions as per requests.
• Security audits for all logins, including maintenance of unused and orphaned user logins
• Creating & maintaining daily and weekly jobs/maintenance plans, including backups, index rebuild/reorganize, update statistics and database consistency checks
• Creating linked servers and ensuring connectivity between servers
• Monitoring disk space proactively & space management using data and log file shrinking
• Monitoring blocking, deadlocks, open transactions and slow-running queries during performance issues and highlighting costly queries to developers.
• Configuring alerts for deadlocks and blocking to maintain performance
• Implementing high-availability technologies like log shipping, AlwaysOn and mirroring, and troubleshooting them; also have knowledge of replication
• Successfully completed migration of databases from one server to another
• Performing DR drills (online/offline) on a quarterly basis
• PowerShell scripting to monitor and restart the SQL service and get email alerts for the service status.
• Maintaining TNS entries for the Oracle client as per client requests.
• Interacting with customers for requirements
• Contacting customers to update the status of issues and service requests at every stage of resolution
• Managing the proper escalation and notification matrix for all support levels
Database | TECHNICAL SKILLS Operating Systems MS Windows Server 2012/2008/XP Software and Tools MS LiteSpeed, Idera SQL Safe, SSMS, Upgrade Advisor, SQL Server Profiler, SCOM, Diagnostic Manager, Remedy, Jira, Infopacc, Tivoli TDP backup tool, SQL Pack Databases: MS SQL Server 2016/2014/2012/2008 R2/2008, Oracle 10g, Netezza, Microsoft Azure
Education Details
Masters of Science Computer Science Pune, Maharashtra Indira College, Pune University
Lead database administrator
Microsoft Certified Professional with 11 years of experience in database administration on MS SQL Server 2016/2014/2012/2008 R2/ 2008
Skill Details
MS SQL SERVER- Experience - 110 months
Microsoft Azure- Experience - Less than 1 year
Always On availability group- Experience - Less than 1 year
Database mirroring- Experience - Less than 1 year
Performance tuning- Experience - Less than 1 year
Log shipping- Experience - Less than 1 year
Installation, upgrade, migration and patching- Experience - Less than 1 year
Company Details
company - Ensono
description - Employment transfer as a part of project acquisition to Ensono from Wipro.
SQL Server Database Administration
company - Wipro Technologies
description - Microsoft Certified Professional with 11 years of experience in database administration on MS SQL Server 2016/2014/2012/2008 R2/ 2008.
Experience with MS SQL Server 2016/2014/2012/2008 R2/ 2008 installation, upgrade, and administration
Microsoft Azure certified.
Have an understanding of Azure VM, Azure Storage, Azure networking, Azure AD and Azure SQL Database.
Incident management, change management and Problem management for SQL Server Database team.
Participating in meetings, conference calls with client, Service Delivery Manager and Application team for System improvements.
Participated in quarterly DR activity.
Involved in creation of SIP - Service Improvement Plans
Involved in handling of high severity issues and provided RCA for the same.
Worked on Always on availability groups, database mirroring, replication, clustering and log shipping.
Have basic understanding of Oracle and Netezza.
Provided on-call support during out-of-office hours and weekends.
Resource & shift management of 5 SQL DBAs from offshore in multi-client environment for Data center services.
Provided KT to team members, monitor and guide trainees.
company - Wipro Technologies
description - Responsibilities: • MS SQL Server 2016/2014/2012/2008 R2/2008 installation, configuration and administration.
• Worked on Always On availability groups, log shipping, database mirroring and clustering.
• Participated in PCI scan report follow-up to install security hotfixes and service packs for SQL Servers to remove vulnerabilities.
• Participated in the Holmes BOTS automation implementation of the SQL Pack tool.
• Worked on service requests, incidents and critical issues.
• Involved in conference calls to provide DBA support for critical issues.
• Performance tuning.
Environment: SQL Server 2016/2014/2012/2008R2/2008, Windows Server 2012/2008R2/2008
company - Mphasis
description -
company - Mphasis
description - Responsibilities: • MS SQL Server 2012/2008 R2/2008 installation, configuration and administration.
• Worked on Always On availability groups, log shipping, database mirroring and clustering.
• Performed SQL Server patching activity • Worked on daily reports such as cluster failover, backup, AG/LS/Mirror and server disk space reports.
• Worked on service requests, incidents and critical issues.
• Participated in quarterly DR activity.
• Involved in conference calls to provide DBA support for critical issues.
• Provided support to the Windows team during patching for AG/mirror/cluster failover-failback and database health checks.
• Performed all health checks for market-open servers and provided updates on the market-open call • Deeply involved in issue resolution and root cause analysis • Performance tuning.
Environment: SQL Server 2012/2008R2/2008, Windows Server 2008R2/2008
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: • SQL Server, Oracle and Netezza database support tasks.
• MS SQL Server 2008 R2/2008 installation, upgrade and administration.
• Capacity planning for database growth for all SQL Servers.
• Troubleshooting alerts.
• Worked on log shipping and mirroring.
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008, Oracle 10g/RAC
company - Synechron Technologies Pvt. Ltd
description -
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: • Pursued in-depth training on Oracle 11g architecture and SQL Server.
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008, Oracle 10g
company - Synechron Technologies Pvt. Ltd
description - Responsibilities: • Carried out version changes for schemas from the PE8 version to the EE11 version as per the given process
Environment: Oracle 11g
company - Mastek Ltd
description - Responsibilities: • SQL Server 2008 R2/2008 installation, upgrade and administration • Database backup/restore.
• Performed MS SQL Server audits • Worked with database mirroring, replication, log shipping and clustering.
• Supported UAT and PROD environments • Performed deployment document reviews.
• Carried out deployments for different applications
Environment: SQL Server 2008R2/2008, Windows Server 2008R2/2008
company - Mastek Ltd
description -
company - PP Software and Systems Ltd
description -
company - PP Software and Systems Ltd
description - Description: The system provides Master Data Management and Procurement modules for dairy industry.
Responsibilities: • Designed, coded and tested • Customized the ERP system as per requirements
Environment: Core Java, PostgreSQL |
Database | SKILLSET Oracle DBA, MySQL, MariaDB, PostgreSQL Database Administration IT SKILLS SQL, Oracle 10g, 11g, MySQL, MariaDB, PostgreSQL, Windows, Linux, PuTTY
Education Details
January 2018 MCS Pune, Maharashtra Pune University
Database administrator
Database administrator - Infiniteworx Omnichannel Pvt. Ltd
Skill Details
DATABASE- Experience - 17 months
MYSQL- Experience - 17 months
ORACLE- Experience - 17 months
SQL- Experience - 17 months
DATABASE ADMINISTRATION- Experience - 6 months
Company Details
company - Infiniteworx Omnichannel Pvt. Ltd
description - Pune Sept 2017 to Present
RESPONSIBILITIES:
• Creating tablespaces and planning the location of data, monitoring tablespace growth periodically.
• All replication setup.
• Moved database schema changes to stage.
• DBA support query resolution.
• Creating users and granting specific privileges.
• Database management.
• Database recovery, moving data files to different locations.
• Planning backup policies and backup/recovery of databases based on criticality.
• IMPORT/EXPORT.
• Defining schemas
Key Result Areas:
• Providing 24/7 support to resolve database performance issues, job failures and session problems, and diagnosing root causes
• Installation, configuration and updating of Oracle server software and related Oracle products. Installation, configuration and updating of MySQL, SQL Server, MariaDB and MongoDB
• Supported multiple databases and administered Oracle databases of large sizes for production, development & test setups.
• Maintaining tablespaces & data files, control files, online redo log files
• Creating users, granting roles & privileges to users and managing tablespaces for different users by granting quota on default & temporary tablespaces (a user-creation sketch follows this list).
• Taking Oracle RMAN backups (scheduled for day-wise backup)
• Implementing incremental, cumulative and full RMAN backups for each database for space management and effective recovery.
• Logical backup using export & import / Data Pump export of important tables at regular intervals.
• Regular checking of trace files, the alert log file and all ORA- errors
• Working on incidents such as user creation/deletion incidents and backup-failure incidents.
• Checking listener status, connectivity troubleshooting and fixing database listener issues.
• Looking for any new alert/error log entries and error details in generated trace files. Executing DDL & DML scripts as per customer requirements
• Mentoring, coaching and appraising team members, with active involvement in the recruitment process
• Contributing to project documentation and generating daily reports
• Ensuring compliance with quality norms and taking steps for any non-conformance; spearheading project activities to ensure timely completion
• Implementing security policies on different database systems by granting and revoking privileges to users
• Following change management processes and participating in related meetings
• Verifying that all instances/DBs are running and tablespaces are online; monitoring background processes and status.
company - InnovativeTechnologies
description - Clients: BANKING DOMAIN |
Database | Education Details
January 2016 BSc. Mumbai, Maharashtra Mumbai University
January 2013 H.S.C. Maharashtra Board
January 2011 S.S.C. Maharashtra Board
MySQL Database Administrator
2+ years of experience as a MySQL Database Administrator (MySQL DBA)
Skill Details
MySQL DBA, CentOS, Backup, Restore, Replication, Query Optimization- Experience - 24 months
Company Details
company - Trimax IT Infrastructure & Services Ltd
description - • MySQL installation, maintenance and upgrades (versions 5.5, 5.6)
• MySQL database administration on a large-scale MySQL installation
• Experience with MySQL on both Linux and Windows
• MySQL processes, security management and query optimization.
• Performed query analysis for slow and problematic queries.
• Performed structural changes to databases, such as creating tables and adding columns according to business requirements
• Creating and maintaining database maintenance plans.
• Writing scripts to create jobs for backup & restore plans.
• Working on MyISAM to InnoDB engine conversion.
• Working on server shifting, parameter tuning and database purging
• Working on MySQL master-slave replication (a replication health check follows this list)
• Handling release management and user acceptance.
• Restores using XtraBackup.
• Responsibilities include monitoring daily, weekly and monthly system maintenance tasks such as database backups, replication verification, database integrity verification and indexing updates
• Working in 24/7 production database support.
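A minimal health check for the master-slave replication mentioned above; the monitoring credentials are assumed, and on MySQL 5.5/5.6 the relevant fields come from SHOW SLAVE STATUS.

    #!/usr/bin/env bash
    # Check replication threads and lag on a MySQL 5.5/5.6 slave (credentials are assumed).
    mysql -u monitor -p -e "SHOW SLAVE STATUS\G" \
      | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'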
company - Trimax IT Infrastructure & Services Ltd
description - • MySQL installation, maintenance and upgrades (versions 5.5, 5.6)
• MySQL database administration on a large-scale MySQL installation
• Experience with MySQL on both Linux and Windows
• MySQL processes, security management and query optimization.
• Performed query analysis for slow and problematic queries.
• Performed structural changes to databases, such as creating tables and adding columns according to business requirements
• Creating and maintaining database maintenance plans.
• Writing scripts to create jobs for backup & restore plans.
• Working on MyISAM to InnoDB engine conversion.
• Working on server shifting, parameter tuning and database purging
• Working on MySQL master-slave replication
• Handling release management and user acceptance.
• Restores using XtraBackup.
• Responsibilities include monitoring daily, weekly and monthly system maintenance tasks such as database backups, replication verification, database integrity verification and indexing updates
• Working in 24/7 production database support. |
Database | TECHNICAL SKILL: Operating System LINUX, Windows Server 2012 R2, Windows 98, Windows 2000/ XP Tools & Utility Packages SQL* Loader, SQL*PLUS, OEM, Datapump, expdp/impdp, PLSQL Developer, Jenkins Database Oracle 10g, Oracle 11g, Oracle 12c Scripting UNIX Shell Scripting Language SQL Education Details
January 2011 M.B.A. Amravati, Maharashtra Amravati University
January 2007 B.C.A. Nagpur, Maharashtra Nagpur University
Oracle Database Administrator
ORACLE DATABASE ADMINISTRATOR ON LINUX/MICROSOFT WITH 4 YEARS EXPERIENCE.
Skill Details
ORACLE- Experience - 48 months
LINUX- Experience - 6 months
ORACLE DBA- Experience - Less than 1 year
RAC- Experience - Less than 1 year
GOLDEN GATE- Experience - Less than 1 year
ASM- Experience - Less than 1 year
DATAGUARD- Experience - Less than 1 year
Company Details
company - TIETO INDIA PVT. LTD
description - Pune From February 2015 till present
Project Profile:
The Oil and Gas unit of Tieto India Pvt. Ltd. works on the Energy Components (EC) application; Tieto is the authorized service provider for EC. Energy Components is a complete end-to-end hydrocarbon accounting solution following the hydrocarbons from production to transport, sales and revenue recognition. It is globally market-leading hydrocarbon accounting software with functionality coverage exceeding other available solutions, built on a modern, flexible and scalable technology platform, and selected as the global standard and best practice by oil & gas supermajors.
Responsibilities: • Oracle Database Administration 11g R2, 12c and 18c • Supporting databases in 24x7 environments and coordinating with Application, OS, Storage and Development teams across test and production environments • Regularly monitoring the trace files and alert log files for database-related issues.
• Experience in monitoring CPU usage, IO and memory utilization at OS level.
• Checking the alert log file to analyze ORA- errors and raising SRs with Oracle where needed.
• Monitoring the log files, backups, database space usage and the use of system resources.
• Configuring RMAN backups for databases and restoring databases.
• Installation, configuration and updating of Oracle server software and related Oracle products for 11g and 12c.
• Oracle server installation, client installation and configuration, PL/SQL Developer installation.
• Creating databases using DBCA and manually.
• Creating Oracle users and granting proper privileges to users as per request.
• Creating AWR, ASH and ADDM reports for database performance analysis (an AWR snapshot sketch follows this list).
• Handling space management and performance issues in Oracle databases.
• Creating remote database links.
• Renaming and resizing data files in Oracle databases when needed.
• Shrinking tablespaces at regular intervals to reclaim server space.
• Expertise in export and import using Data Pump in Oracle databases.
• Expertise in configuration of the listener and tnsnames through NETMGR and NETCA, as well as statically.
• Managing the Oracle listener and Oracle network files.
• Creating user profiles and granting specific privileges and roles to users in Oracle databases.
• Maintaining tablespaces & data files, control files, and online redo log files in Oracle databases.
• Worked on AWS cloud services such as EC2, S3, RDS, ELB, EBS, VPC, Route 53, Auto Scaling, CloudWatch, CloudFront and IAM for installing, configuring and troubleshooting various Amazon images during server migration from physical hosts into the cloud. |
Database | Technical Skills Databases: Oracle RDBMS- 10g, 11g & 12c Technology/utilities: Data Pump, RMAN, Data guard, ASM, RAC, Golden Gate Tools: OCC, PUTTY, SQLPLUS, SQL Developer, Netbackup, SCOM, SCCM, VMWare Vsphere Operating Systems: RHEL 6.0, RHEL 6.5, UNIX and Microsoft Windows
Education Details
Database Administrator
Database Administrator - BNY Mellon
Skill Details
DATABASES- Experience - 24 months
ORACLE- Experience - 24 months
RMAN- Experience - 24 months
NETBACKUP- Experience - 24 months
SCOM- Experience - 24 months
Company Details
company - BNY Mellon
description - Databases: 600+
Team Size: 8
Duration: Jan 2017 - Till Date
Clients: Over 130+ investment banking organizations who are hosted with Eagle
Responsibilities: Database Management (supporting and managing critical production, pre-production, test and reporting databases on different platforms), Capacity Management, Upgrades.
• Handling day-to-day database activities, monitoring and incident management.
• Building new databases as per requirements and preparing them for go-live with the help of multiple teams.
• Working on scheduled database patching activity (CPU, PSU) • Installing the latest patches on production, Dev and Test databases as per suggestions from Oracle Support.
• Database upgrades from 11g to 12c.
• Adding disks to ASM disk groups.
• Building DR databases using Active Data Guard, keeping them in sync with prod and resolving issues if any persist • Data Guard management - checking lag status, clearing archive lag, checking processes like RFS/MRP, archive management • Working on tablespace-related issues • Managing user access and profiles • Importing and exporting using Data Pump • Maintaining an inventory of all databases in a single centralized database • Refreshing test environments from the production database.
• Working with Oracle Support to resolve Oracle errors.
• Scheduling daily and weekly database backups using RMAN, troubleshooting RMAN issues.
• Database cloning using RMAN.
• Taking part in cutovers to upgrade the application to higher versions.
• Strictly following the ITIL process for incident management and change management.
• Providing a weekly report of issues in team meetings, also participating in and suggesting service improvement plans.
• Database migrations from one server to another or to different platforms • RCA and impact analysis reporting during any production outage.
Previous Organization: Brose India
Project I: Central IT Management
company -
description - Responsibilities: Managing our internal databases and servers of Brose global.
• Providing 24x7 on-call support in rotational shifts.
• Performing day-to-day database activities • Monitoring and responding to DBA group mails for all alerts, issues and ad-hoc business user requests, etc.
• Database creation and patching • Backup of databases in frequent cycles using Data Pump/RMAN.
• Database refreshes using RMAN and Data Pump.
• Recovery using copies of data / RMAN • Monitoring logs and traces to resolve issues.
• Creating new VM servers and preparing them for go-live, also decommissioning servers as per requirements.
• Weekly patching of Windows servers using SCCM and taking actions for patching if needed • Monitoring and troubleshooting daily and weekly OS backups using Symantec NetBackup • Managing user accounts for OS users and database users • Monitoring OS-level alerts using SCOM
Project II: Data Center Migration (Onsite Project)
Responsibilities: The data center migration activity involved moving our data center from one location to another; all of our servers and databases were migrated successfully.
• Installation of Oracle 11g on Linux platforms • Worked on service requests (Incidents / Changes / Requests) • Creation of users, managing user privileges • Configured RMAN backups for databases • Patching of databases • Configuring physical standby databases using Data Guard • Cloning of servers and migration to another cluster
ACADEMIA / PERSONAL DETAILS ⢠Bachelor of Engineering (B.E.) in Computer Science and Engineering From SGBAU Amravati University, Amravati in 2014 with CGPA of 7.21
Current Address:- Mr. Yogesh Tikhat, C/O: Raj Ahmad, Flat# G2-702, Dreams Aakruti, Kalepadal, Hadapsar, Pune - 411028
Highest Qualification BE (cse)
PAN: - AOFPT5052C |
Database | Software Skills: * RDBMS: MS SQL SERVER 2000/2005/2008, 2012, 2014 * Operating Systems: WINDOWS XP/7, WINDOWS SERVER 2008, 12 * Fundamentals: MS Office 03/07 * Tools: SSMS, Performance Monitor, SQL Profiler, SQL LiteSpeed. Company name: Barclays Technology Centre India. Team Size: 24 Role: Database Administrator Support Description: Barclays is a UK-based retail & investment bank, over 300 years old. It has operations in over 40 countries and employs approximately 120,000 people. Barclays is organised into four core businesses: Personal & Corporate (Personal Banking, Corporate Banking, Wealth & Investment Management), Barclaycard, and Investment Banking. Responsibilities: • Attending various calls from all over the world on various database issues. • Working on Web GUI alerts and resolving incident tickets within the timelines. • Troubleshooting log shipping issues and fixing the related alerts. • Identifying and resolving blocking and locking related issues. • Configuration and monitoring of replication, log shipping and mirroring setups. • Working on replication issues and Always On issues. • Granting and revoking permissions for various account provisioning tasks. • On-call support during weekends, performing DR tests, and working on weekly maintenance jobs and weekend change requests. Education Details
B.Sc. Maths Kakatiya University Board secured
SQL server database administrator
Database administrator
Skill Details
DATABASE- Experience - 120 months
DATABASE ADMINISTRATOR- Experience - 72 months
MAINTENANCE- Experience - 48 months
MS SQL SERVER- Experience - 48 months
REPLICATION- Experience - 48 months
Company Details
company - Barclays global services centre
description - SQL Server database implementation and maintenance
Log shipping, replication, high availability, clustering, performance tuning, database mirroring, installation, configuration, upgrades, migration
company - Wipro Infotech Pvt Ltd
description - SQL server database administrator
company - CITI Bank
description - Worked as Database Support at Accord Fintech, Sanpada from Sep 2008 to Feb 2013.
company -
description - 2012.
• Sound knowledge of database backup, restore, attach, detach and disaster recovery procedures.
• Developed backup and recovery strategies for the production environment.
• Ensuring data consistency in databases through DBCC and DMV commands.
• Experience in query tuning, stored procedures and troubleshooting performance issues.
• Hands-on experience in DR processes including log shipping and database mirroring.
• Experience in scheduling and monitoring of jobs.
• Experience in configuring and troubleshooting Always On.
• Creating and maintaining maintenance plans.
• Expertise in planning and implementing MS SQL Server security and database permissions.
• Clear understanding of the implementation of log shipping, replication and mirroring of databases between servers.
• Performance tuning (Performance Monitor, SQL Profiler, Query Analyzer) • Security for servers & databases: implementing security by creating roles/users, adding users to roles and assigning rights to roles.
• Creating and maintaining linked servers between SQL instances.
• Creating and maintaining Database Mail.
• Monitoring and troubleshooting database issues.
• Creating DTS packages for executing the required tasks.
• Expertise in creating and maintaining indexes, including rebuilds and reorganizes.
• Daily maintenance of SQL Servers, including reviewing SQL error logs and Event Viewer. |
Database | Areas of Expertise • Oracle Databases 12c, 11g, 10g • WebLogic 12c, 11g • Grid Infrastructure • RMAN • ASM • Middleware: OIM, OAM, SOA • Shell Scripts • DataGuard • Web servers - OHS, Apache • Architecture Designs • Proof of Concepts • DevOps
Education Details
January 2007 Bachelor of Engineering Information Technology Sangli, Maharashtra Walchand College
January 2004 Diploma Computer Engineering Jalgaon, Maharashtra Govt. Polytechnic
Lead Database Administrator
Lead Database Administrator - Tieto Software
Skill Details
DATABASES- Experience - 108 months
MIDDLEWARE- Experience - 96 months
RMAN- Experience - 84 months
SHELL SCRIPTS- Experience - 48 months
Company Details
company - Tieto Software
description - As part of the AO (Application Operations) team, the scope of the project is considerably wider than typical database administration. The range of accomplishments extends right from the data tier to the middle tier & application tier:
- Maximized availability of applications from 99.3% to 99.8%
- Raised business by presenting Proof of Concepts for 10+ cases
- Delivered upgrades of various applications from time to time to keep them on supported platforms
- Saved SLAs as per contract by means of handling P1, P2 issues effectively
- Produced Capacity reports comprising all layers (Data, Middleware, Web) of various applications
- Generated Work Orders as per customer need
company - Tieto Software
description - - Designed databases of various applications
- Planned RMAN backup and recovery, BCP strategies
- Executed Business Continuity Testing for various applications
- Introduced Zero Cost high availability solutions - Active-Passive Failover
- Optimized performance by means of scripting automation
- Established cloning procedures for all layers of various applications
- Delivered Infrastructure changes, like FW Openings & LoadBalancer configuration for new applications
- Eliminated downtime by troubleshooting issues for Middleware products - OIM, OAM, SOA
- Contributed to build & maintain Integration Layer- SMTP, ftp, Reverse Proxy, OCM
company - Tieto Software
description - - Provided database support to environments - PROD, UAT, TEST, DEV
- Performed Database Refresh/Cloning from production to development and support databases
- Reduced risk level by means of upgrading & patching databases time to time
- Protected databases by assigning appropriate roles and privileges as per SOD
- Generated & maintained Middleware schemas using RCU
- Exceeded scope of work by supporting & maintaining WebLogic platform - installation, patching, troubleshooting issues
- Expanded duty scope to web servers: Install & maintain- OHS, apache, tomcat
company - HSBC Software
description - Being part of project supporting HSBC Bank France, I achieved:
- Handled incidents & service requests as day-to-day database administration tasks
- Delivered basic implementation services - Database installation, patching, upgrades
- Performed capacity planning - managing tablespaces, compressions
- Contributed to maintaining quality of databases - managing instances, indexes, re-organization, performance monitoring & tuning using AWR, ADDM reports
- Maintained backups & recovery of database - logical backups (exp/imp), datapump (expdp/impdp), cold backups, hot backups, RMAN backup/restore, RMAN Duplication
- Reduced effort through automation - value-add initiatives including shell scripts for automated housekeeping operations, scheduling backups, and using crontab/at to schedule tasks
- Implemented high availability solutions - Dataguard |
Database | Education Details
May 2011 to May 2014 Bachelor of science Information technology Mumbai, Maharashtra Mumbai university
Oracle DBA
Oracle database administrator
Skill Details
Installation of Oracle on RH Linux & Windows. Creating/managing user profiles and analyzing their privileges and tablespace quotas. Backup of databases using logical and physical procedures. Recovery of databases in case of database crashes, disk/media failures, etc. Standard DBA functions like space management, rollback segments, extents. Database management and monitoring of the database. Willing to learn new things. Being a constructive team member, contributing practically to the success of the team.- Experience - 48 months
Company Details
company - Accelya kale solutions ltd
description - Database Administrator working in a 24*7 support environment maintaining databases running on Oracle 11g and 12c.
Database upgrades from Oracle 11g to Oracle 12c.
Installation of Database critical patches.
Taking cold and hot backups on scheduled times and monitoring backups.
Importing the export dump to another database as per demands.
Automating most of the daily activities through cronjobs, shell scripts or schedulers.
Making Plan of Actions for Various Activities.
Raising SR with Oracle Support for different severity issues.
Handling user requests and proper client interaction.
Monitoring & managing database growth and tablespaces; adding, resizing and renaming datafiles.
Restoration of database using RMAN backups for backup consistency checks.
Migration of Database using export / import and RMAN backups.
Configuring & managing Physical Standby database.
Creating database links, Tablespaces, database directories.
Managing network settings through listener.ora and tnsnames.ora files.
Restoration of data using old logical backup as per client request.
Schema replication across databases through data pump tool.
Taking cold and hot backups on scheduled times and monitoring backups
Taking EXPDP of database, database objects and a particular schema
Using the SCP ticketing tool to keep track of client requests.
Performing maintenance activities such as index rebuilding and stats gathering.
Troubleshooting basic-level performance issues.
Setting up a new environment from database perspective within the requested timelines
Adding/Deleting disks in ASM and monitoring the ASM diskgroups.
Creating users & privileges with appropriate roles and levels of security.
Database Administrator working in a 24*7 support environment maintaining databases running on Oracle 11g and 12c.
Performing online and offline database re-organization for database enhancement.
Migrating databases from non-ASM to ASM file systems.
Grid upgrade from 11g to 12c.
company - Insolutions Global Ltd
description - Oracle software installation (graphical/silent), database upgrades, patch upgrades.
Maintaining around 80+ UAT DB servers, 40 production DB and 28 standby/DR DB.
Managing/creating DR & standby servers, DB sync.
Backup and recovery (RMAN/ Datapump).
Performing activities like switchover and failover.
Allocating system storage and planning future storage requirements for the database system.
Enrolling users and maintaining system security.
Monitoring alert logs, snap ID generation, DB size, server space, OEM reports and user validity.
Controlling and monitoring user access to the database.
Scheduling shell scripts or dbms_jobs using crontab or DBMS_SCHEDULER (monitoring scripts, listener checks, backup scripts, AWR reports), etc.
Planning for backup and recovery of the database.
Managing the production database for Oracle and SQL Server and resizing the space of databases/datafiles/tablespaces/transaction logs.
Managing Temp and Undo tablespaces.
Creating primary database storage structures (tablespaces) after application developers have designed an application. |
Database | TECHNICAL SKILLS • SQL • Oracle v10, v11, v12 • R programming, Python, linear regression, machine learning and statistical modelling techniques (obtained certification through the Edvancer Eduventures training institute) KEY SKILLS • Multitasking, working to meet client SLAs in high-pressure scenarios, handling sensitive clients, along with improved skills at being a team player. • Excellent communication skills and quick learner. • Leadership qualities, team networking and the courage to take up problems proactively.
Education Details
June 2012 Sadvidya Pre-University College
Application Database Administrator-DBMS (Oracle)
Application Database Administrator-DBMS (Oracle) - IBM India Pvt Ltd
Skill Details
CLIENTS- Experience - 30 months
MACHINE LEARNING- Experience - 30 months
ORACLE- Experience - 30 months
SQL- Experience - 30 months
EXCELLENT COMMUNICATION SKILLS- Experience - 6 months
Company Details
company - IBM India Pvt Ltd
description - Client: Blue Cross Blue Shield MA: Massachusetts Health Insurance
• Used Oracle SQL to store and organize data. This includes capacity planning, installation, configuration, database design, migration, security, troubleshooting, backup and data recovery.
• Worked with client databases installed on Oracle v10, v11, v12 on a Linux platform.
• Proficient communication with clients across locations, facilitating data elicitation.
• Handling numerous business requests and solving them diligently within the given time frame, and responding quickly and effectively to production issues within SLA.
• Leading a team in coordination with the business to conduct weekly checkouts of the database servers and systems
IBM Certifications
Statistics 101, Applied Data Science with R, Big Data Foundations, Data Science Foundations
Business Analytics Certification (Pune)
Worked on Retail and Banking projects, to design a predictive business model using machine learning techniques in
R programming for an efficient business and marketing strategy. |
Hadoop | Education Details
Hadoop Developer
Hadoop Developer - INFOSYS
Skill Details
Company Details
company - INFOSYS
description - Project Description: The bank stored its data in different data warehouse systems for each department, which made it difficult for the organization to manage the data and to perform analytics on historical data, so the data was combined into a single global repository in Hadoop for analysis.
Responsibilities:
• Analyzed the banking rates data set.
• Created the specification document.
• Provided effort estimation.
• Developed Spark Scala and Spark SQL programs using the Eclipse IDE on Windows/Linux environments.
• Created KPI test scenarios, test cases and test result documents.
• Tested the Scala programs in Linux Spark standalone mode.
• Set up a multi-node cluster on AWS and deployed the Spark Scala programs.
• Provided solutions using the Hadoop ecosystem - HDFS, MapReduce, Pig, Hive, HBase, and Zookeeper.
• Provided solutions using large-scale server-side systems with distributed processing algorithms.
• Created reports for the BI team, using Sqoop to export data into HDFS and Hive.
• Provided solutions in supporting and assisting in troubleshooting and optimization of MapReduce jobs and Pig Latin scripts.
• Deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect system performance.
• Worked on importing and exporting data from different databases such as Oracle and Teradata into HDFS and Hive using Sqoop, TPT and Connect Direct (a Sqoop import sketch follows this project).
• Imported and exported data from RDBMS to HDFS/HBase.
• Wrote a script and placed it on the client side so that data moved to HDFS is stored in a temporary file and then loaded into Hive tables.
• Developed Sqoop scripts to handle the interaction between Pig and the MySQL database.
• Involved in developing Hive reports and partitioning of Hive tables.
• Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
• Involved in running Hadoop jobs for processing millions of records of text data
Environment: Java, Hadoop, HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Oozie, HBase, Spark, Scala,
Linux, NoSQL, Storm, Tomcat, Putty, SVN, GitHub, IBM WebSphere v8.5.
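A minimal sketch of the kind of Spark SQL job described in the responsibilities above; the resume names Spark Scala, but PySpark is used below to keep all sketches in one language, and the table and column names are hypothetical.

```python
# Minimal PySpark sketch of a banking-rates KPI job (hypothetical table/columns;
# the resume's actual programs were written in Spark Scala).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("banking-rates-kpi")
         .enableHiveSupport()          # read the source data from Hive
         .getOrCreate())

kpis = spark.sql("""
    SELECT branch_id,
           AVG(interest_rate) AS avg_rate,
           COUNT(*)           AS num_products
    FROM   banking_rates               -- hypothetical Hive table
    GROUP  BY branch_id
""")

kpis.write.mode("overwrite").saveAsTable("kpi_banking_rates_summary")
spark.stop()
```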
Project #1: TELECOMMUNICATIONS
Hadoop Developer
Description: To identify customers who are likely to churn, a 360-degree view of the customer is created from different heterogeneous data sources. The data is brought into the data lake (HDFS) from different sources and analyzed using Hadoop tools such as Pig and Hive.
Responsibilities:
⢠Installed and Configured Apache Hadoop tools like Hive, Pig, HBase and Sqoop for application development and unit testing.
⢠Wrote MapReduce jobs to discover trends in data usage by users.
⢠Involved in database connection using SQOOP.
⢠Involved in creating Hive tables, loading data and writing hive queries Using the HiveQL.
⢠Involved in partitioning and joining Hive tables for Hive query optimization.
⢠Experienced in SQL DB Migration to HDFS.
⢠Used NoSQL(HBase) for faster performance, which maintains the data in the De-Normalized way for OLTP.
⢠The data is collected from distributed sources into Avro models. Applied transformations and standardizations and loaded into HBase for further data processing.
⢠Experienced in defining job flows.
⢠Used Oozie to orchestrate the workflow.
⢠Implemented Fair schedulers on the Job tracker to share the resources of the Cluster for the Map Reduce
jobs given by the users.
⢠Exported the analyzed data to the relational databases using HIVE for visualization and to generate reports for the BI team.
Environment: Hadoop, Hive, Linux, MapReduce, HDFS, Hive, Python, Pig, Sqoop, Cloudera, Shell Scripting,
Java (JDK 1.6), Java 6, Oracle 10g, PL/SQL, SQL*PLUS |
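The Hive partitioning mentioned in the churn project above can be sketched roughly as follows; this is an illustrative example only, issued through PySpark's spark.sql, and the table and column names are hypothetical.

```python
# Sketch of a partitioned Hive table and a dynamic-partition load
# (hypothetical table/column names; shown via PySpark for brevity).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdr-partitioning").enableHiveSupport().getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS cdr_partitioned (
        subscriber_id STRING,
        minutes_used  DOUBLE
    )
    PARTITIONED BY (call_date STRING)
    STORED AS PARQUET
""")

# Dynamic partitioning lets Hive derive the partition value from the data itself.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT OVERWRITE TABLE cdr_partitioned PARTITION (call_date)
    SELECT subscriber_id, minutes_used, call_date
    FROM   cdr_raw                       -- hypothetical staging table
""")
spark.stop()
```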
Hadoop | Skill Set: Hadoop, MapReduce, HDFS, Hive, Sqoop, Java. Duration: 2016 to 2017. Role: Hadoop Developer. Rplus offers a quick, simple and powerful cloud-based solution, Demand Sense, to accurately predict demand for a product in all markets. It combines enterprise and external data to predict demand more accurately, uses social conversations and sentiment to derive demand, identifies the significant drivers of sales out of hordes of factors, and selects the best-suited model out of multiple forecasting models for each product. Responsibilities: • Involved in deploying the product for customers, gathering requirements and optimizing algorithms at the back end of the product. • Load and transform large datasets of structured and semi-structured data. • Responsible for managing data coming from different sources and applications. • Supported MapReduce programs running on the cluster. • Involved in creating Hive tables, loading them with data and writing Hive queries which run internally as MapReduce jobs. Education Details
Hadoop Developer
Hadoop Developer - Braindatawire
Skill Details
APACHE HADOOP HDFS - Experience - 49 months
APACHE HADOOP SQOOP - Experience - 49 months
Hadoop - Experience - 49 months
HADOOP - Experience - 49 months
HADOOP DISTRIBUTED FILE SYSTEM - Experience - 49 months
Company Details
company - Braindatawire
description - Technical Skills:
• Programming: Core Java, MapReduce, Scala
• Hadoop Tools: HDFS, Spark, MapReduce, Sqoop, Hive, HBase
• Database: MySQL, Oracle
• Scripting: Shell Scripting
• IDE: Eclipse
• Operating Systems: Linux (CentOS), Windows
• Source Control: Git (GitHub) |
Hadoop | ⢠Operating systems:-Linux- Ubuntu, Windows 2007/08 ⢠Other tools:- Tableau, SVN, Beyond Compare.Education Details
January 2016 Bachelors of Engineering Engineering Gujarat Technological University
Systems Engineer/Hadoop Developer
Systems Engineer/Hadoop Developer - Tata Consultancy Services
Skill Details
Hadoop, Spark, Sqoop, Hive, Flume, Pig - Experience - 24 months
Company Details
company - Tata Consultancy Services
description - Roles and responsibilities:
Working for an American pharmaceutical company (one of the world's premier biopharmaceutical companies) which develops and produces medicines and vaccines for a wide range of medical disciplines, including immunology, oncology, cardiology, endocrinology, and neurology. Big data analytics is used to handle the large volume of United Healthcare data. Data from all possible sources - records of all patients (old and new), records of medicines, Treatment Pathways & Patient Journey for Health Outcomes, Patient Finder (or Rare Disease Patient Finder), etc. - is gathered, stored and processed in one place.
⢠Worked on cluster with specs as:
o Cluster Architecture: Fully
Distributed Package Used:
CDH3
o Cluster Capacity: 20 TB
o No. of Nodes: 10 Data Nodes + 3 Masters + NFS Backup For NN
⢠Developed proof of concepts for enterprise adoption of Hadoop.
⢠Used SparkAPI over Cloudera Hadoop YARN to perform analytics on the Healthcare data in Cloudera
distribution.
⢠Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and trouble-shooting, manage and review data backups, and reviewing Hadoop log files.
⢠Imported & exported large data sets of data into HDFS and vice-versa using sqoop.
⢠Involved developing the Pig scripts and Hive Reports
⢠Worked on Hive partition and bucketing concepts and created hive external and Internal tables with Hive
partition.Monitoring Hadoop scripts which take the input from HDFS and load the data into Hive.
⢠Developed Spark scripts by using Scala shell commands as per the requirement and worked with both
Data frames/SQL/Data sets and RDD/MapReduce in Spark. Optimizing of existing algorithms in Hadoop
using SparkContext, Spark-SQL, Data Frames and RDD's.
⢠Collaborated with infrastructure, network, database, application and BI to ensure data, quality and availability.
⢠Developed reports using TABLEAU and exported data to HDFS and hive using Sqoop.
⢠Used ORC & Parquet file formats for serialization of data, and Snappy for the compression of the data.
Achievements
⢠Appreciation for showing articulate leadership qualities in doing work with the team.
⢠Completed the internal certification of TCS Certified Hadoop Developer.
Ongoing Learning
⢠Preparing and scheduled the Cloudera Certified Spark Developer CCA 175. |
Hadoop | Areas of expertise • Big Data Ecosystems: Hadoop (HDFS), MapReduce, Hive, Pig, Sqoop, HBase, Oozie, Spark, PySpark, HUE, with working knowledge of Cassandra • Programming Languages: Python, Core Java, and basic Scala • Databases: Oracle 10g, MySQL, SQL Server; NoSQL - HBase, Cassandra • Tools: Eclipse, Toad, FTP, Tectia, Putty, Autosys, Anaconda, Jupyter notebook, and DevOps - RTC, RLM • Scripting Languages: JSP • Platforms: Windows, Unix Education Details
M.Tech (IT-DBS) B.Tech (CSE) SRM University
Software Engineer
Software Engineer - Larsen and Toubro
Skill Details
Company Details
company - Larsen and Toubro
description - Worked as a Software Engineer at Technosoft Corporation, Chennai from Aug 2015 to Sep 2016.
company - Current Project
description - Duration: September 2016 to Till date
Vendor: Citi bank
Description:
Citibank's (Citi) Anti-Money Laundering (AML) Transaction Monitoring (TM) program is a future state solution and a rules-based system for transaction monitoring of ICG-Markets business.
Roles and Responsibilities:
• Building and providing domain knowledge of Anti-Money Laundering among team members.
• The layered architecture has Data Warehouse and Workspace layers which are used by Business Analysts.
• Actively involved in designing the star-schema model involving various dimension and fact tables.
• Designed SCD2 for maintaining the history of the DIM data (see the sketch after this list).
• Developing Hive queries for mapping data between different layers of the architecture, and their usage in Oozie workflows.
• Integration with the Data Quality and Reconciliation module.
• Regression and integration testing of the solution for any issues in integration with other modules, and effectively testing the data flow from layer to layer.
• Transaction monitoring system development to generate alerts for suspicious and fraudulent transactions based on requirements provided by BAs.
• Developing Spark jobs for various business rules.
• Learning machine learning, which will be used later in the project to develop an effective fraud-detection model for the Anti-Money Laundering system.
• Scheduling jobs using the Autosys tool.
• Deployment and code management using RTC and RLM (Release Lifecycle Management).
Hadoop Developer
# Current Project: PRTS - RAN
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Sqoop, HBase, Tez, Tableau, Sqlserver, Teradata
Cluster Size: 96 Node Cluster.
Distribution: Hortonworks HDP 2.3
company - Alcatel lucent
description - 1X) and Ruckus Wireless
Description:
The scope of this project is to maintain and store the operational and parameter data collected from multiple vendors' networks by the mediation team into the OMS data store, and to make it available to RF engineers to boost network performance.
Responsibilities:
⢠Working with Hadoop Distributed File System.
⢠Involved in importing data from MySQL to HDFS using SQOOP.
⢠Involved in creating Hive tables, loading with data and writing hive queries which will run on top of Tez execution Engine.
⢠Involved in Preparing Test cases Document.
⢠Involved in Integrating Hive and HBase to store the operational data.
⢠Monitoring the Jobs through Oozie.
company - Current Project
description - Anti-Money Laundering
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Oozie, Spark, Unix, Autosys, Python, RTC, RLM, ETL framework
Cluster Size: 56 Node Cluster.
Distribution: Cloudera 5.9.14 |
Hadoop | Technical Skill Set: Programming Languages: Apache Hadoop, Python, shell scripting, SQL; Technologies: Hive, Pig, Sqoop, Flume, Oozie, Impala, HDFS; Tools: Dataiku, Unravel, Cloudera, Putty, HUE, Cloudera Manager, Eclipse, Resource Manager. Initial Learning Program: Tata Consultancy Services: June 2015 to August 2015. Description: This is a learning program conducted by TCS for newly joined employees to help them learn the working standards of the organization. During this period employees are groomed in various technical as well as ethical aspects. Education Details
B.E. Electronics & Communication Indore, Madhya Pradesh Medi-caps Institute of Technology & Management
Hadoop developer
hadoop,hive,sqoop,flume,pig,mapreduce,python,impala,spark,scala,sql,unix.
Skill Details
APACHE HADOOP SQOOP - Experience - 31 months
Hadoop - Experience - 31 months
HADOOP - Experience - 31 months
Hive - Experience - 31 months
SQOOP - Experience - 31 months
python - Experience - Less than 1 year
hdfs - Experience - Less than 1 year
unix - Experience - Less than 1 year
impala - Experience - Less than 1 year
pig - Experience - Less than 1 year
unravel - Experience - Less than 1 year
mapreduce - Experience - Less than 1 year
dataiku - Experience - Less than 1 year
Company Details
company - Tata Consultancy Services
description - Project Description
Data warehouse division has multiple products for injecting, storing, analysing and presenting data. The Data Lake program is started to provide multi-talent, secure data hub to store application's data on Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The object of the project is to provide necessary engineering support to analytics and application teams so that they can focus on the business logic development. In this project, the major task is to set up the Hadoop cluster and govern all the activities which are required for the smooth functioning of various Hadoop ecosystems. As the day and day data increasing so to provide stability to the ecosystem and smooth working of it, Developing and automating the various requirement specific utilities.
Responsibility 1. Developed a proactive health-check utility for the Data Lake. The utility proactively checks the smooth functioning of all Hadoop components on the cluster and emails the result in HTML format; it is used for daily health checks as well as after upgrades (a sketch of this kind of utility follows this list).
2. Getting data in different formats and processing it in the Hadoop ecosystem after filtering it using the appropriate techniques.
3. Developed a data pipeline utility to ingest data from RDBMS databases into Hive external tables using Sqoop commands. The utility also offers data quality checks such as row count validation.
4. Developed and automated various cluster health-check, usage and capacity reports using Unix shell scripting.
5. Optimization of Hive queries in order to increase performance and minimize Hadoop resource utilization.
6. Creating Flume agents to move data into the Hadoop ecosystem.
7. Performed benchmark testing on Hive and Impala queries.
8. Involved in setting up the cluster and its components, such as the edge node and HA implementation of the services Hive Server2, Impala and HDFS.
9. Filtering the required data from the available data using different techniques such as Pig and regex SerDe.
10. Dataiku benchmark testing on top of Impala and Hive compared to the Greenplum database.
11. Moving data from the Greenplum database to Hadoop with the help of a Sqoop pipeline, processing the data on the Hadoop side and storing it in Hive tables for performance testing.
12. Dealing with Hadoop ecosystem related issues in order to provide stability to the WM Hadoop ecosystem.
13. Rescheduling jobs from Autosys job hosting to TWS job hosting for better performance.
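Item 1 above (the proactive health-check utility that mails an HTML report) can be sketched as follows; the checks, SMTP relay and addresses are hypothetical placeholders and this is not the utility's actual contents.

```python
# Minimal sketch of a daily Hadoop health-check that mails an HTML summary
# (hypothetical checks, SMTP host and recipients).
import subprocess
import smtplib
from email.mime.text import MIMEText

CHECKS = {
    "HDFS report": ["hdfs", "dfsadmin", "-report"],
    "Hive ping":   ["beeline", "-u", "jdbc:hive2://hive-host:10000", "-e", "SELECT 1"],
}

rows = []
for name, cmd in CHECKS.items():
    # A zero exit code is treated as a healthy service.
    ok = subprocess.run(cmd, capture_output=True).returncode == 0
    rows.append(f"<tr><td>{name}</td><td>{'OK' if ok else 'FAILED'}</td></tr>")

html = "<table border='1'><tr><th>Check</th><th>Status</th></tr>" + "".join(rows) + "</table>"

msg = MIMEText(html, "html")
msg["Subject"] = "Data Lake daily health check"
msg["From"] = "datalake-monitor@example.com"
msg["To"] = "hadoop-support@example.com"

with smtplib.SMTP("smtp.example.com") as server:   # hypothetical SMTP relay
    server.send_message(msg)
```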
Declaration:
I hereby declare that the above mentioned information is authentic to the best of my knowledge
company - Tata Consultancy Services
description - Clients: 1. Barclays 2. Union bank of California (UBC) 3. Morgan Stanley (MS)
KEY PROJECTS HANDLED
Project Name ABSA- Reconciliations, UBC and WMDATALAKE COE
company - Tata Consultancy Services
description - Project Description
Migration of data from an RDBMS database to Hive (Hadoop ecosystem), leveraging the Hadoop platform's strong data governance, lineage, auditing and monitoring capabilities. The objective of this project was to speed up data processing so that analysis and decision making become easy. Due to RDBMS limitations in processing vast amounts of data at once and producing results quickly, the client wanted to move the data to the Hadoop ecosystem so that they could overcome those limitations and focus only on business improvement.
Responsibility 1. Optimising the SQL queries for those data which were not required to move from RDBMS to any other platform.
2. Writing the Hive queries and logic to move the data from RDBMS to Hadoop ecosystem.
3. Writing the hive queries to analyse the required data as per the business requirements.
4. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
5. Writing the sqoop commands and scripts to move the data from RDBMS to Hadoop side.
company - Tata Consultancy Services
description - Project Description
Creating recs and migrating the static setup of reconciliations from version 8.1 to version 9.1 of the IntelliMatch environment.
Responsibility 1. Worked on extracting business requirements, analyzing them and implementing them while developing recs.
2. Worked on migrating the static setup of reconciliations from version 8.1 to version 9.1 of the IntelliMatch environment.
3. Did the back-end work, most of which involved writing SQL queries and providing the data for the new recs.
Project Name PSO |
Hadoop | Technical Skills Programming Languages: C, C++, Java, .NET, J2EE, HTML5, CSS, MapReduce Scripting Languages: JavaScript, Python Databases: Oracle (PL/SQL), MySQL, IBM DB2 Tools: IBM Rational Rose, R, Weka Operating Systems: Windows XP, Vista, UNIX, Windows 7, Red Hat 7 Education Details
January 2015 B.E Pimpri Chinchwad, MAHARASHTRA, IN Pimpri Chinchwad College of Engineering
January 2012 Diploma MSBTE Dnyanganaga Polytechnic
S.S.C New English School Takali
Hadoop/Big Data Developer
Hadoop/Big Data Developer - British Telecom
Skill Details
APACHE HADOOP MAPREDUCE - Experience - 37 months
MapReduce - Experience - 37 months
MAPREDUCE - Experience - 37 months
JAVA - Experience - 32 months
.NET - Experience - 6 months
Company Details
company - British Telecom
description - Project: British Telecom project (UK)
Responsibilities:
⢠Working on HDFS, MapReduce, Hive, Spark, Scala, Sqoop, Kerberos etc. technologies
⢠Implemented various data mining algorithms on Spark like K-means clustering, Random forest, Naïve bayes etc.
⢠A knowledge of installing, configuring, maintaining and securing Hadoop.
company - DXC technology
description - HPE legacy), Bangalore
⢠Worked on Hadoop + Java programming
⢠Worked on Azure and AWS (EMR) services.
⢠Worked on HDInsight Hadoop cluster..
⢠Design, develop, document and architect Hadoop applications
⢠Develop MapReduce coding that works seamlessly on Hadoop clusters.
⢠Analyzing and processing the large data sets on HDFS.
⢠An analytical bent of mind and ability to learn-unlearn-relearn surely comes in handy. |
Hadoop | Technical Skill Set Big Data Ecosystems: Hadoop, HDFS, HBase, MapReduce, Sqoop, Hive, Pig, Spark Core, Flume. Other Languages: Scala, Core Java, SQL, PL/SQL, Shell Scripting ETL Tools: Informatica PowerCenter 8.x/9.6, Talend 5.6 Tools: Eclipse, IntelliJ IDEA. Platforms: Windows family, Linux/UNIX, Cloudera. Databases: MySQL, Oracle 10g/11g. Education Details
M.C.A Pune, MAHARASHTRA, IN Pune University
Hadoop Developer
Hadoop Developer - PRGX India Private Limited Pune
Skill Details
Company Details
company - PRGX India Private Limited Pune
description - Team Size: 10+
Environment: Hive, Spark, Sqoop, Scala and Flume.
Project Description:
The bank wanted to help its customers avail themselves of different products of the bank by analyzing their expenditure behavior. Customer spending ranges from online shopping, medical expenses in hospitals and cash transactions to debit card usage. This behavior allows the bank to create an analytical report, based on which the bank displays product offers on the customer portal, which was built using Java. The portal allows customers to log in and see the transactions they make on a day-to-day basis. The analytics also help customers plan their budgets through the budget watch and my financial forecast applications embedded in the portal. The portal used the Hadoop framework to analyse the data as per the rules and regulations placed by the regulators of the respective countries. The offers and the interest rates also complied with the regulations, and all this processing was done using the Hadoop framework as the big data analytics system.
Role & Responsibilities:
• Import data from the legacy system into Hadoop using Sqoop and Flume.
• Implement the business logic to analyse the data.
• Pre-process data using Spark (see the sketch after this list).
• Create Hive scripts and load data into Hive.
• Source various attributes into the data processing logic to retrieve the correct results.
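A minimal sketch of the "pre-process with Spark, then load into Hive" flow from the list above; the landing path, column names and target table are hypothetical placeholders.

```python
# Sketch: read raw transaction files, clean them with Spark, append to a Hive table
# (hypothetical path, schema and table).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-preprocess").enableHiveSupport().getOrCreate()

raw = spark.read.option("header", True).csv("/data/landing/transactions/")  # hypothetical path

cleaned = (raw
           .dropDuplicates(["txn_id"])                       # hypothetical key column
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0))

cleaned.write.mode("append").saveAsTable("bank.customer_transactions")      # hypothetical table
spark.stop()
```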
Project 2
company - PRGX India Private Limited Pune
description -
company - PRGX India Private Limited Pune
description - Team Size: 11+
Environment: Hadoop, HDFS, Hive, Sqoop, MySQL, Map Reduce
Project Description:-
The purpose of this project is to store terabytes of information from the web application and extract meaningful information out of it. The solution was based on the open-source software Hadoop. The data is stored in the Hadoop file system and processed using Map/Reduce jobs, which in turn includes getting the raw HTML data from the micro websites, processing the HTML to obtain product and user information, extracting various reports out of the visitor tracking information, and exporting the information for further processing.
Role & Responsibilities:
• Move all crawl data flat files generated from various micro sites to HDFS for further processing.
• Sqoop implementation for interaction with the database.
• Write MapReduce scripts to process the data files (see the Hadoop Streaming sketch after this list).
• Create Hive tables to store the processed data in tabular format.
• Create reports from the Hive data.
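The MapReduce scripts mentioned above could be written, for example, as Hadoop Streaming jobs in Python. The two sketches below (a mapper and a reducer counting visits per product) assume a hypothetical tab-separated crawl-data layout and are not the project's actual scripts.

```python
#!/usr/bin/env python3
# mapper.py - emit (product_id, 1) for each visit record.
# Hypothetical assumption: tab-separated input with the product id in column 3.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) > 2:
        print(f"{fields[2]}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py - sums the counts emitted by mapper.py (input arrives sorted by key).
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{count}")
        current_key, count = key, 0
    count += int(value)
if current_key is not None:
    print(f"{current_key}\t{count}")
```

Such scripts would typically be submitted with the Hadoop Streaming jar, e.g. `hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/crawl -output /data/visit_counts` (jar and paths hypothetical).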
Project 3
company - PRGX India Private Limited Pune
description - Team Size: 15+
Environment: Informatica 9.5, Oracle11g, UNIX
Project Description:
Pfizer Inc. is an American global pharmaceutical corporation headquartered in New York City. The main objective of the project is to build a Development Data Repository (DDR) for Pfizer Inc. All the downstream applications, such as Etrack, TSP database, RTS, SADMS, GFS and GDO, ran their own SQL requests directly on the OLTP system, which slowed down the OLTP system's performance. For this we created a Development Data Repository to replace all the SQL requests made directly on the OLTP system. The DDR process extracts all clinical, pre-clinical, study, product, subject and site related information from the upstream applications like EPECS, CDSS, RCM, PRC, E-CLINICAL and EDH, applies some business logic and puts it into the DDR core tables. From these, snapshot and dimensional layers are created, which are used by the reporting application.
Role & Responsibilities:
• To understand and analyze the requirement documents and resolve queries.
• To design Informatica mappings using various basic transformations like Filter, Router, Source Qualifier, Lookup etc., and advanced transformations like Aggregator, Joiner, Sorter and so on.
• Perform cross-unit and integration testing for mappings developed within the team; reporting bugs and bug fixing.
• Create workflows/batches and set the session dependencies.
• Implemented Change Data Capture using mapping parameters, SCD and SK generation.
• Developed mapplets and reusable transformations to populate the data into the data warehouse.
• Created sessions and worklets using Workflow Manager to load the data into the target database.
• Involved in Unit Case Testing (UTC).
• Performing unit testing and UAT for SCD Type 1/Type 2, fact load and CDC implementation.
Personal Scan
Address: Jijayi Heights, Flat no 118, Narhe, (Police chowki) Pune- 411041 |
Hadoop | Education Details
Hadoop Developer
Hadoop Developer - INFOSYS
Skill Details
Company Details
company - INFOSYS
description - Project Description: The banking information had stored the data in different data ware house systems for each department but it becomes difficult for the organization to manage the data and to perform some analytics on the past data, so it is combined them into a single global repository in Hadoop for analysis.
Responsibilities:
⢠Analyze the banking rates data set.
⢠Create specification document.
⢠Provide effort estimation.
⢠Develop SPARK Scala, SPARK SQL Programs using Eclipse IDE on Windows/Linux environment.
⢠Create KPI's test scenarios, test cases, test result document.
⢠Test the Scala programs in Linux Spark Standalone mode.
⢠setup multi cluster on AWS, deploy the Spark Scala programs
⢠Provided solution using Hadoop ecosystem - HDFS, MapReduce, Pig, Hive, HBase, and Zookeeper.
⢠Provided solution using large scale server-side systems with distributed processing algorithms.
⢠Created reports for the BI team using Sqoop to export data into HDFS and Hive.
⢠Provided solution in supporting and assisting in troubleshooting and optimization of MapReduce jobs and
Pig Latin scripts.
⢠Deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect
system performance.
⢠Worked on Importing and exporting data from different databases like Oracle, Teradata into HDFS and Hive
using Sqoop, TPT and Connect Direct.
⢠Import and export the data from RDBMS to HDFS/HBASE
⢠Wrote script and placed it in client side so that the data moved to HDFS will be stored in temporary file and then it will start loading it in hive tables.
⢠Developed the Sqoop scripts in order to make the interaction between Pig and MySQL Database.
⢠Involved in developing the Hive Reports, Partitions of Hive tables.
⢠Created and maintained technical documentation for launching HADOOP Clusters and for executing HIVE
queries and PIG scripts.
⢠Involved in running Hadoop jobs for processing millions of records of text data
Environment: Java, Hadoop, HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Oozie, HBase, Spark, Scala,
Linux, NoSQL, Storm, Tomcat, Putty, SVN, GitHub, IBM WebSphere v8.5.
Project #1: TELECOMMUNICATIONS
Hadoop Developer
Description To identify customers who are likely to churn and 360-degree view of the customer is created from different heterogeneous data sources. The data is brought into data lake (HDFS) from different sources and analyzed using different Hadoop tools like pig and hive.
Responsibilities:
⢠Installed and Configured Apache Hadoop tools like Hive, Pig, HBase and Sqoop for application development and unit testing.
⢠Wrote MapReduce jobs to discover trends in data usage by users.
⢠Involved in database connection using SQOOP.
⢠Involved in creating Hive tables, loading data and writing hive queries Using the HiveQL.
⢠Involved in partitioning and joining Hive tables for Hive query optimization.
⢠Experienced in SQL DB Migration to HDFS.
⢠Used NoSQL(HBase) for faster performance, which maintains the data in the De-Normalized way for OLTP.
⢠The data is collected from distributed sources into Avro models. Applied transformations and standardizations and loaded into HBase for further data processing.
⢠Experienced in defining job flows.
⢠Used Oozie to orchestrate the workflow.
⢠Implemented Fair schedulers on the Job tracker to share the resources of the Cluster for the Map Reduce
jobs given by the users.
⢠Exported the analyzed data to the relational databases using HIVE for visualization and to generate reports for the BI team.
Environment: Hadoop, Hive, Linux, MapReduce, HDFS, Hive, Python, Pig, Sqoop, Cloudera, Shell Scripting,
Java (JDK 1.6), Java 6, Oracle 10g, PL/SQL, SQL*PLUS |
Hadoop | Skill Set: Hadoop, Map Reduce, HDFS, Hive, Sqoop, java. Duration: 2016 to 2017. Role: Hadoop Developer Rplus offers an quick, simple and powerful cloud based Solution, Demand Sense to accurately predict demand for your product in all your markets which Combines Enterprise and External Data to predict demand more accurately through Uses Social Conversation and Sentiments to derive demand and Identifies significant drivers of sale out of hordes of factors that Selects the best suited model out of multiple forecasting models for each product. Responsibilities: ⢠Involved in deploying the product for customers, gathering requirements and algorithm optimization at backend of the product. ⢠Load and transform Large Datasets of structured semi structured. ⢠Responsible to manage data coming from different sources and application ⢠Supported Map Reduce Programs those are running on the cluster ⢠Involved in creating Hive tables, loading with data and writing hive queries which will run internally in map reduce way.Education Details
Hadoop Developer
Hadoop Developer - Braindatawire
Skill Details
APACHE HADOOP HDFS- Exprience - 49 months
APACHE HADOOP SQOOP- Exprience - 49 months
Hadoop- Exprience - 49 months
HADOOP- Exprience - 49 months
HADOOP DISTRIBUTED FILE SYSTEM- Exprience - 49 monthsCompany Details
company - Braindatawire
description - Technical Skills:
⢠Programming: Core Java, Map Reduce, Scala
⢠Hadoop Tools: HDFS, Spark, Map Reduce, Sqoop, Hive, Hbase
⢠Database: MySQL, Oracle
⢠Scripting: Shell Scripting
⢠IDE: Eclipse
⢠Operating Systems: Linux (CentOS), Windows
⢠Source Control: Git (Github) |
Hadoop | ⢠Operating systems:-Linux- Ubuntu, Windows 2007/08 ⢠Other tools:- Tableau, SVN, Beyond Compare.Education Details
January 2016 Bachelors of Engineering Engineering Gujarat Technological University
Systems Engineer/Hadoop Developer
Systems Engineer/Hadoop Developer - Tata Consultancy Services
Skill Details
Hadoop,Spark,Sqoop,Hive,Flume,Pig- Exprience - 24 monthsCompany Details
company - Tata Consultancy Services
description - Roles and responsibility:
Working for a American pharmaceutical company (one of the world's premier
biopharmaceutical) who develops and produces medicines and vaccines for a wide range of medical
disciplines, including immunology, oncology, cardiology, endocrinology, and neurology. To handle large
amount of United Healthcare data big data analytics is used. Data from all possible data sources like records of all Patients(Old and New), records of medicines, Treatment Pathways & Patient Journey for
Health Outcomes, Patient Finder (or Rare Disease Patient Finder), etc being gathered, stored and processed at one place.
⢠Worked on cluster with specs as:
o Cluster Architecture: Fully
Distributed Package Used:
CDH3
o Cluster Capacity: 20 TB
o No. of Nodes: 10 Data Nodes + 3 Masters + NFS Backup For NN
⢠Developed proof of concepts for enterprise adoption of Hadoop.
⢠Used SparkAPI over Cloudera Hadoop YARN to perform analytics on the Healthcare data in Cloudera
distribution.
⢠Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and trouble-shooting, manage and review data backups, and reviewing Hadoop log files.
⢠Imported & exported large data sets of data into HDFS and vice-versa using sqoop.
⢠Involved developing the Pig scripts and Hive Reports
⢠Worked on Hive partition and bucketing concepts and created hive external and Internal tables with Hive
partition.Monitoring Hadoop scripts which take the input from HDFS and load the data into Hive.
⢠Developed Spark scripts by using Scala shell commands as per the requirement and worked with both
Data frames/SQL/Data sets and RDD/MapReduce in Spark. Optimizing of existing algorithms in Hadoop
using SparkContext, Spark-SQL, Data Frames and RDD's.
⢠Collaborated with infrastructure, network, database, application and BI to ensure data, quality and availability.
⢠Developed reports using TABLEAU and exported data to HDFS and hive using Sqoop.
⢠Used ORC & Parquet file formats for serialization of data, and Snappy for the compression of the data.
Achievements
⢠Appreciation for showing articulate leadership qualities in doing work with the team.
⢠Completed the internal certification of TCS Certified Hadoop Developer.
Ongoing Learning
⢠Preparing and scheduled the Cloudera Certified Spark Developer CCA 175. |
Hadoop | Areas of expertise ⢠Big Data Ecosystems: Hadoop-HDFS, MapReduce, Hive, Pig, Sqoop, HBase Oozie, Spark, Pyspark, HUE and having knowledge on cassandra ⢠Programming Languages: Python, Core Java and have an idea on Scala ⢠Databases: Oracle 10g, MySQL, Sqlserver NoSQL - HBase, Cassandra ⢠Tools: Eclipse, Toad, FTP, Tectia, Putty, Autosys, Anaconda, Jupyter notebool and Devops - RTC, RLM. ⢠Scripting Languages: JSP ⢠Platforms: Windows, UnixEducation Details
M.Tech (IT-DBS) B.Tech (CSE) SRM University
Software Engineer
Software Engineer - Larsen and Toubro
Skill Details
Company Details
company - Larsen and Toubro
description - Worked as a Software Engineer in Technosoft Corporation, Chennai from Aug 2015 to sep 2016.
company - Current Project
description - Duration: September 2016 to Till date
Vendor: Citi bank
Description:
Citibank's (Citi) Anti-Money Laundering (AML) Transaction Monitoring (TM) program is a future state solution and a rules-based system for transaction monitoring of ICG-Markets business.
Roles and Responesbilities:
⢠Building and providing domain knowledge for Anti Money Laundering among team members.
⢠The layered architecture has Data Warehouse and Workspace layers which are used by Business Analysts.
⢠Actively involved in designing of star-schema model involving various Dimensions and Fact tables.
⢠Designed SCD2 for maintaining history of the DIM data.
⢠Developing Hive Queries for mapping data between different layers of architecture, and it's usage in Oozie Workflows.
⢠Integration with Data Quality and Reconciliation Module.
⢠Regression and Integration testing of solution for any issues in integration with other modules and effectively testing the data flow from layer-to-layer.
⢠Transaction monitoring system development to generate Alerts for the suspicious and fraudulent transactions based on requirements provide by BAs.
⢠Developing spark Jobs for various business rules.
⢠Learning "Machine Learning", which will be used further in the project for developing an effective model for Fraud detection for Anti Money Laundering system.
⢠Scheduling Jobs using Autosys tool.
⢠Deployment and Code Management using RTC and RLM(Release Lifecycle Management)
Hadoop Developer
# Current Project: PRTS - RAN
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Sqoop, HBase, Tez, Tableau, Sqlserver, Teradata
Cluster Size: 96 Node Cluster.
Distribution: Horton works - HDP2.3
company - Alcatel lucent
description - 1X) and Ruckus Wireless
Description:
The scope of this project is to maintain and store the operational and parameters data collected from the multiple vendors networks by the mediation team into the OMS data store and make it available for RF engineers to boost the network performance.
Responsibilities:
⢠Working with Hadoop Distributed File System.
⢠Involved in importing data from MySQL to HDFS using SQOOP.
⢠Involved in creating Hive tables, loading with data and writing hive queries which will run on top of Tez execution Engine.
⢠Involved in Preparing Test cases Document.
⢠Involved in Integrating Hive and HBase to store the operational data.
⢠Monitoring the Jobs through Oozie.
company - Current Project
description - Anti - Money laundering
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Oozie, Spark, Unix, Autosys, Python, RTC, RLM, ETL Framwe work
Cluster Size: 56 Node Cluster.
Distribution: Cloudera 5.9.14 |
Hadoop | Technical Skill Set: Programming Languages Apache Hadoop, Python, shell scripting, SQL Technologies Hive, Pig, Sqoop, Flume, Oozie, Impala, hdfs Tools Dataiku, Unravel, Cloudera, Putty, HUE, Cloudera Manager, Eclipse, Resource Manager Initial Learning Program: Tata Consultancy Services: June 2015 to August 2015 Description: This is a learning program conducted by TCS for the newly joined employees, to accomplish them to learn the working standard of the organization. During this period employee are groomed with various technical as well as ethical aspects. Education Details
B.E. Electronics & Communication Indore, Madhya Pradesh Medi-caps Institute of Technology & Management
Hadoop developer
hadoop,hive,sqoop,flume,pig,mapreduce,python,impala,spark,scala,sql,unix.
Skill Details
APACHE HADOOP SQOOP- Exprience - 31 months
Hadoop- Exprience - 31 months
HADOOP- Exprience - 31 months
Hive- Exprience - 31 months
SQOOP- Exprience - 31 months
python- Exprience - Less than 1 year months
hdfs- Exprience - Less than 1 year months
unix- Exprience - Less than 1 year months
impala- Exprience - Less than 1 year months
pig- Exprience - Less than 1 year months
unravel- Exprience - Less than 1 year months
mapreduce- Exprience - Less than 1 year months
dataiku- Exprience - Less than 1 year monthsCompany Details
company - Tata Consultancy Services
description - Project Description
Data warehouse division has multiple products for injecting, storing, analysing and presenting data. The Data Lake program is started to provide multi-talent, secure data hub to store application's data on Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The object of the project is to provide necessary engineering support to analytics and application teams so that they can focus on the business logic development. In this project, the major task is to set up the Hadoop cluster and govern all the activities which are required for the smooth functioning of various Hadoop ecosystems. As the day and day data increasing so to provide stability to the ecosystem and smooth working of it, Developing and automating the various requirement specific utilities.
Responsibility 1. Developed proactive Health Check utility for Data Lake. The utility proactively checks the smooth functioning of all Hadoop components on the cluster and sends the result to email in HTML format. The utility is being used for daily Health Checks as well as after upgrades.
2. Getting the data in different formats and processing the data in Hadoop ecosystem after filtering the data using the appropriate techniques.
3. Developed data pipeline utility to ingest data from RDBMS database to Hive external tables using Sqoop commands. The utility also offers the data quality check like row count validation.
4. Developed and automated various cluster health check, usage, capacity related reports using Unix shell scripting.
5. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
6. Creating flume agents to process the data to Hadoop ecosystem side.
7. Performed benchmark testing on the Hive Queries and impala queries.
8. Involved in setting up the cluster and its components like edge node and HA implementation of the services: Hive Server2, Impala, and HDFS.
9. Filtering the required data from available data using different technologies like pig, regex Serde etc.
10. Dataiku benchmark testing on top of impala and hive in compare to Greenplum database.
11. Moving the data from Greenplum database to Hadoop side with help of Sqoop pipeline, process the data to Hadoop side and storing the data into hive tables to do the performance testing.
12. Dealing with the Hadoop ecosystem related issues in order to provide stability to WM Hadoop ecosystem.
13. Rescheduling of job from autosys job hosting to TWS job hosting for better performance.
Declaration:
I hereby declare that the above mentioned information is authentic to the best of my knowledge
company - Tata Consultancy Services
description - Clients: 1. Barclays 2. Union bank of California (UBC) 3. Morgan Stanley (MS)
KEY PROJECTS HANDLED
Project Name ABSA- Reconciliations, UBC and WMDATALAKE COE
company - Tata Consultancy Services
description - Project Description
Migration of data from RDBMS database to Hive (Hadoop ecosystem) . Hadoop platform ability with strong data governance, lineage, auditing and monitoring capabilities. The objective of this project was to speed up the data processing so that the analysis and decision making become easy. Due to RDBMS limitations to process waste amount of data at once and produce the results at the earliest, Client wanted to move the data to Hadoop ecosystem so that they can over-come from those limitations and focus on business improvement only.
Responsibility 1. Optimising the SQL queries for those data which were not required to move from RDBMS to any other platform.
2. Writing the Hive queries and logic to move the data from RDBMS to Hadoop ecosystem.
3. Writing the hive queries to analyse the required data as per the business requirements.
4. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
5. Writing the sqoop commands and scripts to move the data from RDBMS to Hadoop side.
company - Tata Consultancy Services
description - Project Description
Create recs and migrating static setup of reconciliations from 8.1 version to 9.1 version of the environment Intellimatch.
Responsibility 1. Have worked on extracting business requirements, analyzing and implementing them in developing Recs 2. Worked on migrating static setup of reconciliations from 8.1 version to 9.1 version of the environment Intellimatch.
3. Done the back end work where most of the things were related to writing the sql queries and provide the data for the new recs.
Project Name PSO |
Hadoop | Technical Skills Programming Languages: C, C++, Java, .Net., J2EE, HTML5, CSS, MapReduce Scripting Languages: Javascript, Python Databases: Oracle (PL-SQL), MY-SQL, IBM DB2 Tools:IBM Rational Rose, R, Weka Operating Systems: Windows XP, Vista, UNIX, Windows 7, Red Hat 7Education Details
January 2015 B.E Pimpri Chinchwad, MAHARASHTRA, IN Pimpri Chinchwad College of Engineering
January 2012 Diploma MSBTE Dnyanganaga Polytechnic
S.S.C New English School Takali
Hadoop/Big Data Developer
Hadoop/Big Data Developer - British Telecom
Skill Details
APACHE HADOOP MAPREDUCE- Exprience - 37 months
MapReduce- Exprience - 37 months
MAPREDUCE- Exprience - 37 months
JAVA- Exprience - 32 months
.NET- Exprience - 6 monthsCompany Details
company - British Telecom
description - Project: British Telecom project (UK)
Responsibilities:
⢠Working on HDFS, MapReduce, Hive, Spark, Scala, Sqoop, Kerberos etc. technologies
⢠Implemented various data mining algorithms on Spark like K-means clustering, Random forest, Naïve bayes etc.
⢠A knowledge of installing, configuring, maintaining and securing Hadoop.
company - DXC technology
description - HPE legacy), Bangalore
⢠Worked on Hadoop + Java programming
⢠Worked on Azure and AWS (EMR) services.
⢠Worked on HDInsight Hadoop cluster..
⢠Design, develop, document and architect Hadoop applications
⢠Develop MapReduce coding that works seamlessly on Hadoop clusters.
⢠Analyzing and processing the large data sets on HDFS.
⢠An analytical bent of mind and ability to learn-unlearn-relearn surely comes in handy. |
Hadoop | Technical Skill Set Big Data Ecosystems: Hadoop, HDFS, HBase, Map Reduce, Sqoop, Hive, Pig, Spark-Core, Flume. Other Language: Scala, Core-Java, SQL, PLSQL, Sell Scripting ETL Tools: Informatica Power Center8.x/9.6, Talend 5.6 Tools: Eclipse, Intellij Idea. Platforms: Windows Family, Linux /UNIX, Cloudera. Databases: MySQL, Oracle.10/11gEducation Details
M.C.A Pune, MAHARASHTRA, IN Pune University
Hodoop Developer
Hodoop Developer - PRGX India Private Limited Pune
Skill Details
Company Details
company - PRGX India Private Limited Pune
description - Team Size: 10+
Environment: Hive, Spark, Sqoop, Scala and Flume.
Project Description:
The bank wanted to help its customers to avail different products of the bank through analyzing their expenditure behavior. The customers spending ranges from online shopping, medical expenses in hospitals, cash transactions, and debit card usage etc. the behavior allows the bank to create an analytical report and based on which the bank used to display the product offers on the customer portal which was built using java. The portal allows the customers to login and see their transactions which they make on a day to day basis .the analytics also help the customers plan their budgets through the budget watch and my financial forecast applications embedded into the portal. The portal used hadoop framework to analyes the data as per the rules and regulations placed by the regulators from the respective countries. The offers and the interest rates also complied with the regulations and all these processing was done using the hadoop framework as big data analytics system.
Role & Responsibilities:
â Import data from legacy system to hadoop using Sqoop, flume.
â Implement the business logic to analyses the data
â Per-process data using spark.
â Create hive script and loading data into hive.
â Sourcing various attributes to the data processing logic to retrieve the correct results.
Project 2
company - PRGX India Private Limited Pune
description -
company - PRGX India Private Limited Pune
description - Team Size: 11+
Environment: Hadoop, HDFS, Hive, Sqoop, MySQL, Map Reduce
Project Description:-
The Purpose of this project is to store terabytes of information from the web application and extract meaningful information out of it.the solution was based on the open source s/w hadoop. The data will be stored in hadoop file system and processed using Map/Reduce jobs. Which in trun includes getting the raw html data from the micro websites, process the html to obtain product and user information, extract various reports out of the vistor tracking information and export the information for further processing
Role & Responsibilities:
â Move all crawl data flat files generated from various micro sites to HDFS for further processing.
â Sqoop implementation for interaction with database
â Write Map Reduce scripts to process the data file.
â Create hive tables to store the processed data in tabular formats.
â Reports creation from hive data.
Project 3
company - PRGX India Private Limited Pune
description - Team Size: 15+
Environment: Informatica 9.5, Oracle11g, UNIX
Project Description:
Pfizer Inc. is an American global pharmaceutical corporation headquartered in New York City. The main objective of the project is to build a Development Data Repository for Pfizer Inc. Because all the downstream application are like Etrack, TSP database, RTS, SADMS, GFS, GDO having their own sql request on the OLTP system directly due to which the performance of OLTP system goes slows down. For this we have created a Development Data Repository to replace the entire sql request directly on the OLTP system. DDR process extracts all clinical, pre-clinical, study, product, subject, sites related information from the upstream applications like EPECS, CDSS, RCM, PRC, E-CLINICAL, EDH and after applying some business logic put it into DDR core tables. From these snapshot and dimensional layer are created which are used for reporting application.
Role & Responsibilities:
â To understand & analyze the requirement documents and resolve the queries.
â To design Informatica mappings by using various basic transformations like Filter, Router, Source qualifier, Lookup etc and advance transformations like Aggregators, Joiner, Sorters and so on.
â Perform cross Unit and Integration testing for mappings developed within the team. Reporting bugs and bug fixing.
â Create workflow/batches and set the session dependencies.
â Implemented Change Data Capture using mapping parameters, SCD and SK generation.
â Developed Mapplet, reusable transformations to populate the data into data warehouse.
â Created Sessions & Worklets using workflow Manager to load the data into the Target Database.
â Involved in Unit Case Testing (UTC)
â Performing Unit Testing and UAT for SCD Type1/Type2, fact load and CDC implementation.
Personal Scan
Address: Jijayi Heights, Flat no 118, Narhe, (Police chowki) Pune- 411041 |
Hadoop | Education Details
Hadoop Developer
Hadoop Developer - INFOSYS
Skill Details
Company Details
company - INFOSYS
description - Project Description: The banking information had stored the data in different data ware house systems for each department but it becomes difficult for the organization to manage the data and to perform some analytics on the past data, so it is combined them into a single global repository in Hadoop for analysis.
Responsibilities:
⢠Analyze the banking rates data set.
⢠Create specification document.
⢠Provide effort estimation.
⢠Develop SPARK Scala, SPARK SQL Programs using Eclipse IDE on Windows/Linux environment.
⢠Create KPI's test scenarios, test cases, test result document.
⢠Test the Scala programs in Linux Spark Standalone mode.
⢠setup multi cluster on AWS, deploy the Spark Scala programs
⢠Provided solution using Hadoop ecosystem - HDFS, MapReduce, Pig, Hive, HBase, and Zookeeper.
⢠Provided solution using large scale server-side systems with distributed processing algorithms.
⢠Created reports for the BI team using Sqoop to export data into HDFS and Hive.
⢠Provided solution in supporting and assisting in troubleshooting and optimization of MapReduce jobs and
Pig Latin scripts.
⢠Deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect
system performance.
⢠Worked on Importing and exporting data from different databases like Oracle, Teradata into HDFS and Hive
using Sqoop, TPT and Connect Direct.
⢠Import and export the data from RDBMS to HDFS/HBASE
⢠Wrote script and placed it in client side so that the data moved to HDFS will be stored in temporary file and then it will start loading it in hive tables.
⢠Developed the Sqoop scripts in order to make the interaction between Pig and MySQL Database.
⢠Involved in developing the Hive Reports, Partitions of Hive tables.
⢠Created and maintained technical documentation for launching HADOOP Clusters and for executing HIVE
queries and PIG scripts.
⢠Involved in running Hadoop jobs for processing millions of records of text data
Environment: Java, Hadoop, HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Oozie, HBase, Spark, Scala,
Linux, NoSQL, Storm, Tomcat, Putty, SVN, GitHub, IBM WebSphere v8.5.
Project #1: TELECOMMUNICATIONS
Hadoop Developer
Description To identify customers who are likely to churn and 360-degree view of the customer is created from different heterogeneous data sources. The data is brought into data lake (HDFS) from different sources and analyzed using different Hadoop tools like pig and hive.
Responsibilities:
⢠Installed and Configured Apache Hadoop tools like Hive, Pig, HBase and Sqoop for application development and unit testing.
⢠Wrote MapReduce jobs to discover trends in data usage by users.
⢠Involved in database connection using SQOOP.
⢠Involved in creating Hive tables, loading data and writing hive queries Using the HiveQL.
⢠Involved in partitioning and joining Hive tables for Hive query optimization.
⢠Experienced in SQL DB Migration to HDFS.
⢠Used NoSQL(HBase) for faster performance, which maintains the data in the De-Normalized way for OLTP.
⢠The data is collected from distributed sources into Avro models. Applied transformations and standardizations and loaded into HBase for further data processing.
⢠Experienced in defining job flows.
⢠Used Oozie to orchestrate the workflow.
⢠Implemented Fair schedulers on the Job tracker to share the resources of the Cluster for the Map Reduce
jobs given by the users.
⢠Exported the analyzed data to the relational databases using HIVE for visualization and to generate reports for the BI team.
Environment: Hadoop, Hive, Linux, MapReduce, HDFS, Hive, Python, Pig, Sqoop, Cloudera, Shell Scripting,
Java (JDK 1.6), Java 6, Oracle 10g, PL/SQL, SQL*PLUS |
Hadoop | Skill Set: Hadoop, Map Reduce, HDFS, Hive, Sqoop, java. Duration: 2016 to 2017. Role: Hadoop Developer Rplus offers an quick, simple and powerful cloud based Solution, Demand Sense to accurately predict demand for your product in all your markets which Combines Enterprise and External Data to predict demand more accurately through Uses Social Conversation and Sentiments to derive demand and Identifies significant drivers of sale out of hordes of factors that Selects the best suited model out of multiple forecasting models for each product. Responsibilities: ⢠Involved in deploying the product for customers, gathering requirements and algorithm optimization at backend of the product. ⢠Load and transform Large Datasets of structured semi structured. ⢠Responsible to manage data coming from different sources and application ⢠Supported Map Reduce Programs those are running on the cluster ⢠Involved in creating Hive tables, loading with data and writing hive queries which will run internally in map reduce way.Education Details
Hadoop Developer
Hadoop Developer - Braindatawire
Skill Details
APACHE HADOOP HDFS- Exprience - 49 months
APACHE HADOOP SQOOP- Exprience - 49 months
Hadoop- Exprience - 49 months
HADOOP- Exprience - 49 months
HADOOP DISTRIBUTED FILE SYSTEM- Exprience - 49 monthsCompany Details
company - Braindatawire
description - Technical Skills:
⢠Programming: Core Java, Map Reduce, Scala
⢠Hadoop Tools: HDFS, Spark, Map Reduce, Sqoop, Hive, Hbase
⢠Database: MySQL, Oracle
⢠Scripting: Shell Scripting
⢠IDE: Eclipse
⢠Operating Systems: Linux (CentOS), Windows
⢠Source Control: Git (Github) |
Hadoop | ⢠Operating systems:-Linux- Ubuntu, Windows 2007/08 ⢠Other tools:- Tableau, SVN, Beyond Compare.Education Details
January 2016 Bachelors of Engineering Engineering Gujarat Technological University
Systems Engineer/Hadoop Developer
Systems Engineer/Hadoop Developer - Tata Consultancy Services
Skill Details
Hadoop,Spark,Sqoop,Hive,Flume,Pig- Exprience - 24 monthsCompany Details
company - Tata Consultancy Services
description - Roles and responsibility:
Working for a American pharmaceutical company (one of the world's premier
biopharmaceutical) who develops and produces medicines and vaccines for a wide range of medical
disciplines, including immunology, oncology, cardiology, endocrinology, and neurology. To handle large
amount of United Healthcare data big data analytics is used. Data from all possible data sources like records of all Patients(Old and New), records of medicines, Treatment Pathways & Patient Journey for
Health Outcomes, Patient Finder (or Rare Disease Patient Finder), etc being gathered, stored and processed at one place.
⢠Worked on cluster with specs as:
o Cluster Architecture: Fully
Distributed Package Used:
CDH3
o Cluster Capacity: 20 TB
o No. of Nodes: 10 Data Nodes + 3 Masters + NFS Backup For NN
⢠Developed proof of concepts for enterprise adoption of Hadoop.
⢠Used SparkAPI over Cloudera Hadoop YARN to perform analytics on the Healthcare data in Cloudera
distribution.
⢠Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and trouble-shooting, manage and review data backups, and reviewing Hadoop log files.
⢠Imported & exported large data sets of data into HDFS and vice-versa using sqoop.
⢠Involved developing the Pig scripts and Hive Reports
⢠Worked on Hive partition and bucketing concepts and created hive external and Internal tables with Hive
partition.Monitoring Hadoop scripts which take the input from HDFS and load the data into Hive.
⢠Developed Spark scripts by using Scala shell commands as per the requirement and worked with both
Data frames/SQL/Data sets and RDD/MapReduce in Spark. Optimizing of existing algorithms in Hadoop
using SparkContext, Spark-SQL, Data Frames and RDD's.
⢠Collaborated with infrastructure, network, database, application and BI to ensure data, quality and availability.
⢠Developed reports using TABLEAU and exported data to HDFS and hive using Sqoop.
⢠Used ORC & Parquet file formats for serialization of data, and Snappy for the compression of the data.
Achievements
⢠Appreciation for showing articulate leadership qualities in doing work with the team.
⢠Completed the internal certification of TCS Certified Hadoop Developer.
Ongoing Learning
⢠Preparing and scheduled the Cloudera Certified Spark Developer CCA 175. |
Hadoop | Areas of expertise ⢠Big Data Ecosystems: Hadoop-HDFS, MapReduce, Hive, Pig, Sqoop, HBase Oozie, Spark, Pyspark, HUE and having knowledge on cassandra ⢠Programming Languages: Python, Core Java and have an idea on Scala ⢠Databases: Oracle 10g, MySQL, Sqlserver NoSQL - HBase, Cassandra ⢠Tools: Eclipse, Toad, FTP, Tectia, Putty, Autosys, Anaconda, Jupyter notebool and Devops - RTC, RLM. ⢠Scripting Languages: JSP ⢠Platforms: Windows, UnixEducation Details
M.Tech (IT-DBS) B.Tech (CSE) SRM University
Software Engineer
Software Engineer - Larsen and Toubro
Skill Details
Company Details
company - Larsen and Toubro
description - Worked as a Software Engineer in Technosoft Corporation, Chennai from Aug 2015 to sep 2016.
company - Current Project
description - Duration: September 2016 to Till date
Vendor: Citi bank
Description:
Citibank's (Citi) Anti-Money Laundering (AML) Transaction Monitoring (TM) program is a future state solution and a rules-based system for transaction monitoring of ICG-Markets business.
Roles and Responesbilities:
• Building and providing domain knowledge of Anti-Money Laundering among team members.
• The layered architecture has Data Warehouse and Workspace layers which are used by Business Analysts.
• Actively involved in designing the star-schema model involving various dimension and fact tables.
• Designed SCD2 for maintaining history of the DIM data (a minimal Spark SQL sketch follows below).
• Developing Hive queries for mapping data between different layers of the architecture, and their usage in Oozie workflows.
• Integration with the Data Quality and Reconciliation module.
• Regression and integration testing of the solution for any issues in integration with other modules, and effectively testing the data flow from layer to layer.
• Transaction monitoring system development to generate alerts for suspicious and fraudulent transactions based on requirements provided by BAs.
• Developing Spark jobs for various business rules.
• Learning machine learning, which will be used further in the project to develop an effective fraud-detection model for the Anti-Money Laundering system.
• Scheduling jobs using the Autosys tool.
• Deployment and code management using RTC and RLM (Release Lifecycle Management).
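A minimal Spark/Scala sketch of an SCD Type 2 style dimension maintenance, of the kind the SCD2 bullet above refers to. Because plain Hive tables are not updated in place, this sketch rebuilds the dimension; all table and column names (aml.dim_customer, aml.stg_customer, customer_id, address, and the is_current/start_date/end_date columns) are hypothetical, and the staging snapshot is assumed to carry the same business columns as the dimension.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// SCD Type 2 rebuild sketch over Hive tables; not the project's actual logic.
object Scd2Rebuild {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("scd2-rebuild").enableHiveSupport().getOrCreate()

    val dim = spark.table("aml.dim_customer")                        // assumed columns: customer_id, name, address, is_current, start_date, end_date
    val stg = spark.table("aml.stg_customer").select("customer_id", "name", "address")

    val history = dim.filter(!col("is_current"))                     // already-closed versions stay untouched
    val current = dim.filter(col("is_current"))

    // Keys whose tracked attribute changed against the current dimension rows.
    val changedKeys = current
      .select(col("customer_id"), col("address").as("dim_address"))
      .join(stg, "customer_id")
      .filter(col("dim_address") =!= col("address"))
      .select("customer_id")

    // Close the old current versions and open new current versions for changed keys.
    val closed = current.join(changedKeys, Seq("customer_id"), "left_semi")
      .withColumn("is_current", lit(false))
      .withColumn("end_date", current_date())
    val stillCurrent = current.join(changedKeys, Seq("customer_id"), "left_anti")
    val opened = stg.join(changedKeys, Seq("customer_id"), "left_semi")
      .withColumn("is_current", lit(true))
      .withColumn("start_date", current_date())
      .withColumn("end_date", lit(null).cast("date"))

    history.unionByName(stillCurrent).unionByName(closed).unionByName(opened)
      .write.mode("overwrite").saveAsTable("aml.dim_customer_new")   // swap in after validation

    spark.stop()
  }
}
```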
Hadoop Developer
# Current Project: PRTS - RAN
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Sqoop, HBase, Tez, Tableau, Sqlserver, Teradata
Cluster Size: 96 Node Cluster.
Distribution: Hortonworks - HDP 2.3
company - Alcatel lucent
description - 1X) and Ruckus Wireless
Description:
The scope of this project is to maintain and store the operational and parameter data collected from multiple vendors' networks by the mediation team into the OMS data store and make it available for RF engineers to boost network performance.
Responsibilities:
• Working with the Hadoop Distributed File System.
• Involved in importing data from MySQL to HDFS using Sqoop (a minimal Spark JDBC analogue is sketched below).
• Involved in creating Hive tables, loading them with data and writing Hive queries which run on top of the Tez execution engine.
• Involved in preparing the test cases document.
• Involved in integrating Hive and HBase to store the operational data.
• Monitoring the jobs through Oozie.
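The import above was done with Sqoop; purely as an illustration, a Spark JDBC analogue of that MySQL-to-HDFS ingestion step might look like the sketch below. The JDBC URL, credentials, table name and output path are placeholders, and the MySQL JDBC driver is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

// Illustration only: the project used Sqoop for this import; this is a Spark JDBC analogue.
object MySqlToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("mysql-to-hdfs").getOrCreate()

    val cells = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://oms-db:3306/oms")
      .option("dbtable", "cell_parameters")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("OMS_DB_PASSWORD", ""))
      .option("partitionColumn", "cell_id")       // parallel reads, similar to Sqoop --split-by
      .option("lowerBound", "1")
      .option("upperBound", "1000000")
      .option("numPartitions", "8")
      .load()

    // Land the extract on HDFS as Parquet for downstream Hive/Tez queries.
    cells.write.mode("overwrite").parquet("/data/oms/cell_parameters")

    spark.stop()
  }
}
```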
company - Current Project
description - Anti-Money Laundering
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Oozie, Spark, Unix, Autosys, Python, RTC, RLM, ETL framework
Cluster Size: 56 Node Cluster.
Distribution: Cloudera 5.9.14 |
Hadoop | Technical Skill Set: Programming Languages: Apache Hadoop, Python, shell scripting, SQL; Technologies: Hive, Pig, Sqoop, Flume, Oozie, Impala, HDFS; Tools: Dataiku, Unravel, Cloudera, Putty, HUE, Cloudera Manager, Eclipse, Resource Manager. Initial Learning Program: Tata Consultancy Services: June 2015 to August 2015. Description: This is a learning program conducted by TCS for newly joined employees to help them learn the working standards of the organization. During this period employees are groomed in various technical as well as ethical aspects. Education Details
B.E. Electronics & Communication Indore, Madhya Pradesh Medi-caps Institute of Technology & Management
Hadoop developer
hadoop,hive,sqoop,flume,pig,mapreduce,python,impala,spark,scala,sql,unix.
Skill Details
APACHE HADOOP SQOOP - Experience - 31 months
Hadoop - Experience - 31 months
HADOOP - Experience - 31 months
Hive - Experience - 31 months
SQOOP - Experience - 31 months
python - Experience - Less than 1 year
hdfs - Experience - Less than 1 year
unix - Experience - Less than 1 year
impala - Experience - Less than 1 year
pig - Experience - Less than 1 year
unravel - Experience - Less than 1 year
mapreduce - Experience - Less than 1 year
dataiku - Experience - Less than 1 year
Company Details
company - Tata Consultancy Services
description - Project Description
The data warehouse division has multiple products for ingesting, storing, analysing and presenting data. The Data Lake program was started to provide a multi-tenant, secure data hub to store applications' data on the Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The objective of the project is to provide the necessary engineering support to analytics and application teams so that they can focus on business logic development. In this project, the major task is to set up the Hadoop cluster and govern all the activities required for the smooth functioning of the various Hadoop ecosystems. As the data grows day by day, various requirement-specific utilities are developed and automated to keep the ecosystem stable and running smoothly.
Responsibility 1. Developed a proactive Health Check utility for the Data Lake. The utility proactively checks the smooth functioning of all Hadoop components on the cluster and sends the result by email in HTML format. The utility is used for daily health checks as well as after upgrades.
2. Getting the data in different formats and processing it in the Hadoop ecosystem after filtering it using the appropriate techniques.
3. Developed a data pipeline utility to ingest data from an RDBMS database into Hive external tables using Sqoop commands. The utility also offers data quality checks such as row-count validation (a minimal sketch of such a check follows below).
4. Developed and automated various cluster health check, usage and capacity related reports using Unix shell scripting.
5. Optimization of Hive queries in order to increase performance and minimize Hadoop resource utilization.
6. Creating Flume agents to move the data into the Hadoop ecosystem.
7. Performed benchmark testing on the Hive and Impala queries.
8. Involved in setting up the cluster and its components, like the edge node and HA implementation of the services: HiveServer2, Impala and HDFS.
9. Filtering the required data from the available data using different technologies like Pig, regex SerDe etc.
10. Dataiku benchmark testing on top of Impala and Hive compared to the Greenplum database.
11. Moving the data from the Greenplum database to Hadoop with the help of a Sqoop pipeline, processing the data on the Hadoop side and storing it in Hive tables to do the performance testing.
12. Dealing with Hadoop ecosystem related issues in order to provide stability to the WM Hadoop ecosystem.
13. Rescheduling of jobs from Autosys job hosting to TWS job hosting for better performance.
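The row-count validation mentioned in point 3 could be sketched roughly as below in Spark/Scala; the JDBC source, credentials and table names are assumptions, not the utility's actual code.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of a row-count check between an RDBMS source and the Hive target.
// JDBC URL, credentials and table names are hypothetical placeholders.
object RowCountCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("row-count-check").enableHiveSupport().getOrCreate()

    // Push the count down to the source database; the numeric type returned depends on the source.
    val sourceCount = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//src-db:1521/DWH")
      .option("dbtable", "(SELECT COUNT(*) AS cnt FROM trades) t")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("SRC_DB_PASSWORD", ""))
      .load()
      .first().get(0).toString.toLong

    val targetCount = spark.table("datalake.trades_external").count()

    if (sourceCount == targetCount)
      println(s"PASS: $sourceCount rows in both source and target")
    else
      println(s"FAIL: source=$sourceCount target=$targetCount")

    spark.stop()
  }
}
```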
Declaration:
I hereby declare that the above mentioned information is authentic to the best of my knowledge
company - Tata Consultancy Services
description - Clients: 1. Barclays 2. Union bank of California (UBC) 3. Morgan Stanley (MS)
KEY PROJECTS HANDLED
Project Name ABSA- Reconciliations, UBC and WMDATALAKE COE
company - Tata Consultancy Services
description - Project Description
Migration of data from an RDBMS database to Hive (Hadoop ecosystem), on a Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The objective of this project was to speed up data processing so that analysis and decision making become easy. Because of RDBMS limitations in processing vast amounts of data at once and producing results quickly, the client wanted to move the data to the Hadoop ecosystem to overcome those limitations and focus only on business improvement.
Responsibility 1. Optimising the SQL queries for data that was not required to be moved from the RDBMS to any other platform.
2. Writing the Hive queries and logic to move the data from the RDBMS to the Hadoop ecosystem.
3. Writing the Hive queries to analyse the required data as per the business requirements.
4. Optimization of Hive queries in order to increase performance and minimize Hadoop resource utilization.
5. Writing the Sqoop commands and scripts to move the data from the RDBMS to the Hadoop side.
company - Tata Consultancy Services
description - Project Description
Creating recs and migrating the static setup of reconciliations from version 8.1 to version 9.1 of the IntelliMatch environment.
Responsibility 1. Worked on extracting business requirements, analyzing them and implementing them in developing recs. 2. Worked on migrating the static setup of reconciliations from version 8.1 to version 9.1 of the IntelliMatch environment.
3. Did the back-end work, most of which involved writing the SQL queries and providing the data for the new recs.
Project Name PSO |
Hadoop | Technical Skills Programming Languages: C, C++, Java, .Net, J2EE, HTML5, CSS, MapReduce Scripting Languages: Javascript, Python Databases: Oracle (PL-SQL), MySQL, IBM DB2 Tools: IBM Rational Rose, R, Weka Operating Systems: Windows XP, Vista, UNIX, Windows 7, Red Hat 7 Education Details
January 2015 B.E Pimpri Chinchwad, MAHARASHTRA, IN Pimpri Chinchwad College of Engineering
January 2012 Diploma MSBTE Dnyanganaga Polytechnic
S.S.C New English School Takali
Hadoop/Big Data Developer
Hadoop/Big Data Developer - British Telecom
Skill Details
APACHE HADOOP MAPREDUCE - Experience - 37 months
MapReduce - Experience - 37 months
MAPREDUCE - Experience - 37 months
JAVA - Experience - 32 months
.NET - Experience - 6 months
Company Details
company - British Telecom
description - Project: British Telecom project (UK)
Responsibilities:
• Working on HDFS, MapReduce, Hive, Spark, Scala, Sqoop, Kerberos etc. technologies.
• Implemented various data mining algorithms on Spark, like K-means clustering, Random Forest and Naïve Bayes (a minimal K-means sketch follows below).
• Knowledge of installing, configuring, maintaining and securing Hadoop.
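A minimal Spark MLlib sketch of the K-means clustering mentioned above; the input file, feature columns and number of clusters are illustrative assumptions.

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

// K-means sketch with Spark MLlib; path, columns and k are hypothetical.
object UsageClustering {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("usage-clustering").getOrCreate()

    val usage = spark.read.option("header", "true").option("inferSchema", "true")
      .csv("/data/bt/usage_metrics.csv")

    // Assemble numeric columns into the single feature vector MLlib expects.
    val assembler = new VectorAssembler()
      .setInputCols(Array("daily_minutes", "data_mb", "dropped_calls"))
      .setOutputCol("features")
    val features = assembler.transform(usage)

    val model = new KMeans().setK(5).setSeed(42L).setFeaturesCol("features").fit(features)

    // Attach the cluster id to each record for downstream analysis.
    model.transform(features).select("customer_id", "prediction").show(10)

    spark.stop()
  }
}
```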
company - DXC technology
description - HPE legacy), Bangalore
• Worked on Hadoop + Java programming.
• Worked on Azure and AWS (EMR) services.
• Worked on an HDInsight Hadoop cluster.
• Design, develop, document and architect Hadoop applications.
• Develop MapReduce code that works seamlessly on Hadoop clusters.
• Analyzing and processing large data sets on HDFS.
• An analytical bent of mind and the ability to learn-unlearn-relearn surely comes in handy.
Hadoop | Technical Skill Set Big Data Ecosystems: Hadoop, HDFS, HBase, Map Reduce, Sqoop, Hive, Pig, Spark-Core, Flume. Other Languages: Scala, Core Java, SQL, PL/SQL, Shell Scripting ETL Tools: Informatica PowerCenter 8.x/9.6, Talend 5.6 Tools: Eclipse, IntelliJ IDEA. Platforms: Windows Family, Linux/UNIX, Cloudera. Databases: MySQL, Oracle 10/11g Education Details
M.C.A Pune, MAHARASHTRA, IN Pune University
Hadoop Developer
Hadoop Developer - PRGX India Private Limited Pune
Skill Details
Company Details
company - PRGX India Private Limited Pune
description - Team Size: 10+
Environment: Hive, Spark, Sqoop, Scala and Flume.
Project Description:
The bank wanted to help its customers avail different products of the bank by analyzing their expenditure behaviour. The customers' spending ranges from online shopping, medical expenses in hospitals and cash transactions to debit card usage. This behaviour allows the bank to create an analytical report, based on which the bank displays product offers on the customer portal, which was built using Java. The portal allows customers to log in and see the transactions they make on a day-to-day basis. The analytics also help customers plan their budgets through the Budget Watch and My Financial Forecast applications embedded in the portal. The portal used the Hadoop framework to analyse the data as per the rules and regulations placed by the regulators of the respective countries. The offers and interest rates also complied with the regulations, and all this processing was done using the Hadoop framework as the big data analytics system.
Role & Responsibilities:
➢ Import data from the legacy system to Hadoop using Sqoop and Flume.
➢ Implement the business logic to analyse the data.
➢ Pre-process data using Spark (a minimal sketch follows below).
➢ Create Hive scripts and load data into Hive.
➢ Source various attributes into the data processing logic to retrieve the correct results.
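A minimal Spark/Scala sketch of the pre-processing and Hive load steps above; the landing path, the columns, the cleansing rules and the target table are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Pre-processing + Hive load sketch; paths, schema and table names are hypothetical.
object TxnPreprocess {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("txn-preprocess").enableHiveSupport().getOrCreate()

    // Raw card/cash transactions landed on HDFS by Sqoop/Flume.
    val raw = spark.read.option("header", "true").option("inferSchema", "true")
      .csv("/landing/bank/transactions")

    // Basic cleansing: drop malformed rows, normalise the channel, derive a spend band.
    val cleaned = raw
      .filter(col("amount").isNotNull && col("amount") > 0)
      .withColumn("channel", lower(trim(col("channel"))))
      .withColumn("spend_band",
        when(col("amount") < 1000, "low").when(col("amount") < 10000, "medium").otherwise("high"))

    // Load into a Hive table that the customer-portal analytics read from.
    cleaned.write.mode("append").saveAsTable("portal.transactions_cleaned")

    spark.stop()
  }
}
```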
Project 2
company - PRGX India Private Limited Pune
description -
company - PRGX India Private Limited Pune
description - Team Size: 11+
Environment: Hadoop, HDFS, Hive, Sqoop, MySQL, Map Reduce
Project Description:-
The purpose of this project is to store terabytes of information from the web application and extract meaningful information out of it. The solution was based on the open-source software Hadoop. The data is stored in the Hadoop file system and processed using Map/Reduce jobs, which in turn includes getting the raw HTML data from the micro websites, processing the HTML to obtain product and user information, extracting various reports out of the visitor tracking information, and exporting the information for further processing.
Role & Responsibilities:
➢ Move all crawl-data flat files generated from the various micro sites to HDFS for further processing.
➢ Sqoop implementation for interaction with the database.
➢ Write MapReduce scripts to process the data files (a Spark-based sketch of this step follows below).
➢ Create Hive tables to store the processed data in tabular format.
➢ Create reports from the Hive data.
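The project processed the crawl files with MapReduce scripts; as an illustration only, a Spark/Scala analogue of that processing step is sketched below, with a hypothetical record layout, input path and output table.

```scala
import org.apache.spark.sql.SparkSession

// Illustration only: the project used MapReduce scripts for this step;
// this Spark analogue counts page visits per product from crawl flat files.
object CrawlVisitCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("crawl-visit-counts").enableHiveSupport().getOrCreate()
    import spark.implicits._

    // Assumed layout: one tab-separated record per page hit: timestamp, visitor_id, product_id, url
    val hits = spark.read.textFile("/data/crawl/flat_files")
      .flatMap { line =>
        val f = line.split("\t")
        if (f.length == 4) Some((f(2), f(1))) else None   // (product_id, visitor_id), skip malformed rows
      }
      .toDF("product_id", "visitor_id")

    val visitsPerProduct = hits.groupBy("product_id").count()

    // Store the result in Hive for report creation.
    visitsPerProduct.write.mode("overwrite").saveAsTable("webstats.visits_per_product")

    spark.stop()
  }
}
```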
Project 3
company - PRGX India Private Limited Pune
description - Team Size: 15+
Environment: Informatica 9.5, Oracle11g, UNIX
Project Description:
Pfizer Inc. is an American global pharmaceutical corporation headquartered in New York City. The main objective of the project is to build a Development Data Repository (DDR) for Pfizer Inc. All the downstream applications, such as Etrack, the TSP database, RTS, SADMS, GFS and GDO, were issuing their own SQL requests directly against the OLTP system, due to which the performance of the OLTP system slowed down. For this we created a Development Data Repository to replace all the SQL requests made directly on the OLTP system. The DDR process extracts all clinical, pre-clinical, study, product, subject and site related information from the upstream applications like EPECS, CDSS, RCM, PRC, E-CLINICAL and EDH and, after applying some business logic, puts it into the DDR core tables. From these, snapshot and dimensional layers are created, which are used by the reporting application.
Role & Responsibilities:
➢ Understand and analyze the requirement documents and resolve queries.
➢ Design Informatica mappings using various basic transformations like Filter, Router, Source Qualifier and Lookup, and advanced transformations like Aggregator, Joiner, Sorter and so on.
➢ Perform cross-unit and integration testing for mappings developed within the team. Report bugs and fix them.
➢ Create workflows/batches and set the session dependencies.
➢ Implemented Change Data Capture using mapping parameters, SCD and SK generation.
➢ Developed Mapplets and reusable transformations to populate the data into the data warehouse.
➢ Created Sessions and Worklets using Workflow Manager to load the data into the target database.
➢ Involved in Unit Case Testing (UTC).
➢ Performed unit testing and UAT for SCD Type 1/Type 2, fact load and CDC implementation.
Personal Scan
Address: Jijayi Heights, Flat no 118, Narhe, (Police chowki) Pune- 411041 |
Hadoop | Education Details
Hadoop Developer
Hadoop Developer - INFOSYS
Skill Details
Company Details
company - INFOSYS
description - Project Description: The banking information was stored in different data warehouse systems for each department, which made it difficult for the organization to manage the data and perform analytics on past data, so it was combined into a single global repository in Hadoop for analysis.
Responsibilities:
• Analyze the banking rates data set.
• Create specification documents.
• Provide effort estimation.
• Develop Spark Scala and Spark SQL programs using the Eclipse IDE on Windows/Linux environments.
• Create KPI test scenarios, test cases and test result documents.
• Test the Scala programs in Linux Spark standalone mode.
• Set up a multi-node cluster on AWS and deploy the Spark Scala programs.
• Provided solutions using the Hadoop ecosystem - HDFS, MapReduce, Pig, Hive, HBase, and Zookeeper.
• Provided solutions using large-scale server-side systems with distributed processing algorithms.
• Created reports for the BI team using Sqoop to export data into HDFS and Hive.
• Provided solutions in supporting and assisting in troubleshooting and optimization of MapReduce jobs and Pig Latin scripts.
• Deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect system performance.
• Worked on importing and exporting data from different databases like Oracle and Teradata into HDFS and Hive using Sqoop, TPT and Connect Direct.
• Import and export data from RDBMS to HDFS/HBase.
• Wrote a script, placed on the client side, so that data moved to HDFS is first stored in a temporary file and then loaded into Hive tables (a minimal Spark sketch of such a load follows below).
• Developed the Sqoop scripts to enable interaction between Pig and the MySQL database.
• Involved in developing the Hive reports and partitions of Hive tables.
• Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
• Involved in running Hadoop jobs to process millions of records of text data.
Environment: Java, Hadoop, HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Oozie, HBase, Spark, Scala,
Linux, NoSQL, Storm, Tomcat, Putty, SVN, GitHub, IBM WebSphere v8.5.
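The staged-files-to-Hive load described in the bullets above could look roughly like the following Spark/Scala sketch; the staging path, the schema and the target table are assumptions rather than project details.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of loading staged HDFS files into an existing partitioned Hive table;
// the staging path and table name are hypothetical.
object StagingToHive {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("staging-to-hive").enableHiveSupport().getOrCreate()

    // Allow dynamic partitioning so each load lands in the right partition automatically.
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    val staged = spark.read.option("header", "true").option("inferSchema", "true")
      .csv("/staging/banking/rates")        // temporary landing area written by the client-side script

    // insertInto appends into the existing partitioned Hive table;
    // column order must match the table, with the partition column last.
    staged.write.mode("append").insertInto("banking.rates_history")

    spark.stop()
  }
}
```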
Project #1: TELECOMMUNICATIONS
Hadoop Developer
Description To identify customers who are likely to churn, a 360-degree view of the customer is created from different heterogeneous data sources. The data is brought into the data lake (HDFS) from different sources and analyzed using Hadoop tools like Pig and Hive.
Responsibilities:
⢠Installed and Configured Apache Hadoop tools like Hive, Pig, HBase and Sqoop for application development and unit testing.
⢠Wrote MapReduce jobs to discover trends in data usage by users.
⢠Involved in database connection using SQOOP.
⢠Involved in creating Hive tables, loading data and writing hive queries Using the HiveQL.
⢠Involved in partitioning and joining Hive tables for Hive query optimization.
⢠Experienced in SQL DB Migration to HDFS.
⢠Used NoSQL(HBase) for faster performance, which maintains the data in the De-Normalized way for OLTP.
⢠The data is collected from distributed sources into Avro models. Applied transformations and standardizations and loaded into HBase for further data processing.
⢠Experienced in defining job flows.
⢠Used Oozie to orchestrate the workflow.
⢠Implemented Fair schedulers on the Job tracker to share the resources of the Cluster for the Map Reduce
jobs given by the users.
⢠Exported the analyzed data to the relational databases using HIVE for visualization and to generate reports for the BI team.
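A minimal sketch of the de-normalized HBase write pattern referred to above, using the standard HBase client API from Scala; the table name, column families and row layout are hypothetical, and hbase-site.xml is assumed to be on the classpath.

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

// De-normalized customer record write into HBase; names and values are placeholders.
object Customer360Writer {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()                  // picks up hbase-site.xml from the classpath
    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("customer_360"))

    try {
      // Row key = customer id; attributes kept in one wide row for fast OLTP reads.
      val put = new Put(Bytes.toBytes("CUST-000123"))
      put.addColumn(Bytes.toBytes("profile"), Bytes.toBytes("plan"), Bytes.toBytes("postpaid"))
      put.addColumn(Bytes.toBytes("usage"), Bytes.toBytes("monthly_minutes"), Bytes.toBytes("412"))
      table.put(put)
    } finally {
      table.close()
      connection.close()
    }
  }
}
```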
Environment: Hadoop, Hive, Linux, MapReduce, HDFS, Hive, Python, Pig, Sqoop, Cloudera, Shell Scripting,
Java (JDK 1.6), Java 6, Oracle 10g, PL/SQL, SQL*PLUS |
Hadoop | Skill Set: Hadoop, Map Reduce, HDFS, Hive, Sqoop, Java. Duration: 2016 to 2017. Role: Hadoop Developer. Rplus offers a quick, simple and powerful cloud-based solution, Demand Sense, to accurately predict demand for your product in all your markets. It combines enterprise and external data to predict demand more accurately, uses social conversation and sentiment to derive demand, identifies significant drivers of sales out of hordes of factors, and selects the best-suited model out of multiple forecasting models for each product. Responsibilities: • Involved in deploying the product for customers, gathering requirements and optimizing algorithms at the back end of the product. • Load and transform large datasets of structured and semi-structured data. • Responsible for managing data coming from different sources and applications. • Supported MapReduce programs running on the cluster. • Involved in creating Hive tables, loading them with data and writing Hive queries which run internally as MapReduce jobs. Education Details
Hadoop Developer
Hadoop Developer - Braindatawire
Skill Details
APACHE HADOOP HDFS - Experience - 49 months
APACHE HADOOP SQOOP - Experience - 49 months
Hadoop - Experience - 49 months
HADOOP - Experience - 49 months
HADOOP DISTRIBUTED FILE SYSTEM - Experience - 49 months
Company Details
company - Braindatawire
description - Technical Skills:
• Programming: Core Java, Map Reduce, Scala
• Hadoop Tools: HDFS, Spark, Map Reduce, Sqoop, Hive, HBase
• Database: MySQL, Oracle
• Scripting: Shell Scripting
• IDE: Eclipse
• Operating Systems: Linux (CentOS), Windows
• Source Control: Git (GitHub) |
Hadoop | • Operating systems: Linux (Ubuntu), Windows 2007/08 • Other tools: Tableau, SVN, Beyond Compare. Education Details
January 2016 Bachelors of Engineering Engineering Gujarat Technological University
Systems Engineer/Hadoop Developer
Systems Engineer/Hadoop Developer - Tata Consultancy Services
Skill Details
Hadoop, Spark, Sqoop, Hive, Flume, Pig - Experience - 24 months
Company Details
company - Tata Consultancy Services
description - Roles and responsibilities:
Working for an American pharmaceutical company (one of the world's premier biopharmaceutical companies) that develops and produces medicines and vaccines for a wide range of medical disciplines, including immunology, oncology, cardiology, endocrinology, and neurology. Big data analytics is used to handle the large volume of United Healthcare data: data from all possible sources, such as records of all patients (old and new), records of medicines, Treatment Pathways & Patient Journey for Health Outcomes, and Patient Finder (or Rare Disease Patient Finder), is gathered, stored and processed in one place.
• Worked on a cluster with the following specifications:
o Cluster Architecture: Fully Distributed
o Package Used: CDH3
o Cluster Capacity: 20 TB
o No. of Nodes: 10 Data Nodes + 3 Masters + NFS Backup for NN
• Developed proofs of concept for enterprise adoption of Hadoop.
• Used the Spark API over Cloudera Hadoop YARN to perform analytics on the healthcare data in the Cloudera distribution.
• Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups, and reviewing Hadoop log files.
• Imported and exported large data sets into and out of HDFS using Sqoop.
• Involved in developing the Pig scripts and Hive reports.
• Worked on Hive partitioning and bucketing concepts and created Hive external and internal tables with Hive partitions (a minimal sketch of this table pattern follows below). Monitored Hadoop scripts which take their input from HDFS and load the data into Hive.
• Developed Spark scripts using Scala shell commands as per the requirements and worked with both DataFrames/SQL/Datasets and RDD/MapReduce in Spark. Optimized existing algorithms in Hadoop using SparkContext, Spark SQL, DataFrames and RDDs.
• Collaborated with infrastructure, network, database, application and BI teams to ensure data quality and availability.
• Developed reports using Tableau and exported data to HDFS and Hive using Sqoop.
• Used ORC and Parquet file formats for serialization of the data, and Snappy for compression.
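The external, partitioned Hive tables mentioned above might be created and loaded along the lines of the Spark SQL sketch below; the database, columns, staging table and HDFS location are illustrative assumptions, and bucketing is omitted here.

```scala
import org.apache.spark.sql.SparkSession

// Partitioned external Hive table sketch; all names and paths are hypothetical.
object CreatePartitionedTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("create-partitioned-table").enableHiveSupport().getOrCreate()

    spark.sql(
      """
        |CREATE EXTERNAL TABLE IF NOT EXISTS healthcare.patient_journey (
        |  patient_id STRING,
        |  event_type STRING,
        |  event_ts   TIMESTAMP
        |)
        |PARTITIONED BY (event_date STRING)
        |STORED AS PARQUET
        |LOCATION '/data/healthcare/patient_journey'
      """.stripMargin)

    // Dynamic-partition insert from a staging table into the partitioned table.
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql(
      """
        |INSERT INTO TABLE healthcare.patient_journey PARTITION (event_date)
        |SELECT patient_id, event_type, event_ts,
        |       date_format(event_ts, 'yyyy-MM-dd') AS event_date
        |FROM healthcare.patient_journey_staging
      """.stripMargin)

    spark.stop()
  }
}
```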
Achievements
• Appreciated for demonstrating leadership qualities while working with the team.
• Completed the internal TCS Certified Hadoop Developer certification.
Ongoing Learning
• Preparing for the Cloudera Certified Spark Developer (CCA 175) exam, which is already scheduled. |
Hadoop | Areas of expertise ⢠Big Data Ecosystems: Hadoop-HDFS, MapReduce, Hive, Pig, Sqoop, HBase Oozie, Spark, Pyspark, HUE and having knowledge on cassandra ⢠Programming Languages: Python, Core Java and have an idea on Scala ⢠Databases: Oracle 10g, MySQL, Sqlserver NoSQL - HBase, Cassandra ⢠Tools: Eclipse, Toad, FTP, Tectia, Putty, Autosys, Anaconda, Jupyter notebool and Devops - RTC, RLM. ⢠Scripting Languages: JSP ⢠Platforms: Windows, UnixEducation Details
M.Tech (IT-DBS) B.Tech (CSE) SRM University
Software Engineer
Software Engineer - Larsen and Toubro
Skill Details
Company Details
company - Larsen and Toubro
description - Worked as a Software Engineer in Technosoft Corporation, Chennai from Aug 2015 to sep 2016.
company - Current Project
description - Duration: September 2016 to Till date
Vendor: Citi bank
Description:
Citibank's (Citi) Anti-Money Laundering (AML) Transaction Monitoring (TM) program is a future state solution and a rules-based system for transaction monitoring of ICG-Markets business.
Roles and Responesbilities:
⢠Building and providing domain knowledge for Anti Money Laundering among team members.
⢠The layered architecture has Data Warehouse and Workspace layers which are used by Business Analysts.
⢠Actively involved in designing of star-schema model involving various Dimensions and Fact tables.
⢠Designed SCD2 for maintaining history of the DIM data.
⢠Developing Hive Queries for mapping data between different layers of architecture, and it's usage in Oozie Workflows.
⢠Integration with Data Quality and Reconciliation Module.
⢠Regression and Integration testing of solution for any issues in integration with other modules and effectively testing the data flow from layer-to-layer.
⢠Transaction monitoring system development to generate Alerts for the suspicious and fraudulent transactions based on requirements provide by BAs.
⢠Developing spark Jobs for various business rules.
⢠Learning "Machine Learning", which will be used further in the project for developing an effective model for Fraud detection for Anti Money Laundering system.
⢠Scheduling Jobs using Autosys tool.
⢠Deployment and Code Management using RTC and RLM(Release Lifecycle Management)
Hadoop Developer
# Current Project: PRTS - RAN
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Sqoop, HBase, Tez, Tableau, Sqlserver, Teradata
Cluster Size: 96 Node Cluster.
Distribution: Horton works - HDP2.3
company - Alcatel lucent
description - 1X) and Ruckus Wireless
Description:
The scope of this project is to maintain and store the operational and parameters data collected from the multiple vendors networks by the mediation team into the OMS data store and make it available for RF engineers to boost the network performance.
Responsibilities:
⢠Working with Hadoop Distributed File System.
⢠Involved in importing data from MySQL to HDFS using SQOOP.
⢠Involved in creating Hive tables, loading with data and writing hive queries which will run on top of Tez execution Engine.
⢠Involved in Preparing Test cases Document.
⢠Involved in Integrating Hive and HBase to store the operational data.
⢠Monitoring the Jobs through Oozie.
company - Current Project
description - Anti - Money laundering
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Oozie, Spark, Unix, Autosys, Python, RTC, RLM, ETL Framwe work
Cluster Size: 56 Node Cluster.
Distribution: Cloudera 5.9.14 |
Hadoop | Technical Skill Set: Programming Languages Apache Hadoop, Python, shell scripting, SQL Technologies Hive, Pig, Sqoop, Flume, Oozie, Impala, hdfs Tools Dataiku, Unravel, Cloudera, Putty, HUE, Cloudera Manager, Eclipse, Resource Manager Initial Learning Program: Tata Consultancy Services: June 2015 to August 2015 Description: This is a learning program conducted by TCS for the newly joined employees, to accomplish them to learn the working standard of the organization. During this period employee are groomed with various technical as well as ethical aspects. Education Details
B.E. Electronics & Communication Indore, Madhya Pradesh Medi-caps Institute of Technology & Management
Hadoop developer
hadoop,hive,sqoop,flume,pig,mapreduce,python,impala,spark,scala,sql,unix.
Skill Details
APACHE HADOOP SQOOP- Exprience - 31 months
Hadoop- Exprience - 31 months
HADOOP- Exprience - 31 months
Hive- Exprience - 31 months
SQOOP- Exprience - 31 months
python- Exprience - Less than 1 year months
hdfs- Exprience - Less than 1 year months
unix- Exprience - Less than 1 year months
impala- Exprience - Less than 1 year months
pig- Exprience - Less than 1 year months
unravel- Exprience - Less than 1 year months
mapreduce- Exprience - Less than 1 year months
dataiku- Exprience - Less than 1 year monthsCompany Details
company - Tata Consultancy Services
description - Project Description
Data warehouse division has multiple products for injecting, storing, analysing and presenting data. The Data Lake program is started to provide multi-talent, secure data hub to store application's data on Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The object of the project is to provide necessary engineering support to analytics and application teams so that they can focus on the business logic development. In this project, the major task is to set up the Hadoop cluster and govern all the activities which are required for the smooth functioning of various Hadoop ecosystems. As the day and day data increasing so to provide stability to the ecosystem and smooth working of it, Developing and automating the various requirement specific utilities.
Responsibility 1. Developed proactive Health Check utility for Data Lake. The utility proactively checks the smooth functioning of all Hadoop components on the cluster and sends the result to email in HTML format. The utility is being used for daily Health Checks as well as after upgrades.
2. Getting the data in different formats and processing the data in Hadoop ecosystem after filtering the data using the appropriate techniques.
3. Developed data pipeline utility to ingest data from RDBMS database to Hive external tables using Sqoop commands. The utility also offers the data quality check like row count validation.
4. Developed and automated various cluster health check, usage, capacity related reports using Unix shell scripting.
5. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
6. Creating flume agents to process the data to Hadoop ecosystem side.
7. Performed benchmark testing on the Hive Queries and impala queries.
8. Involved in setting up the cluster and its components like edge node and HA implementation of the services: Hive Server2, Impala, and HDFS.
9. Filtering the required data from available data using different technologies like pig, regex Serde etc.
10. Dataiku benchmark testing on top of impala and hive in compare to Greenplum database.
11. Moving the data from Greenplum database to Hadoop side with help of Sqoop pipeline, process the data to Hadoop side and storing the data into hive tables to do the performance testing.
12. Dealing with the Hadoop ecosystem related issues in order to provide stability to WM Hadoop ecosystem.
13. Rescheduling of job from autosys job hosting to TWS job hosting for better performance.
Declaration:
I hereby declare that the above mentioned information is authentic to the best of my knowledge
company - Tata Consultancy Services
description - Clients: 1. Barclays 2. Union bank of California (UBC) 3. Morgan Stanley (MS)
KEY PROJECTS HANDLED
Project Name ABSA- Reconciliations, UBC and WMDATALAKE COE
company - Tata Consultancy Services
description - Project Description
Migration of data from RDBMS database to Hive (Hadoop ecosystem) . Hadoop platform ability with strong data governance, lineage, auditing and monitoring capabilities. The objective of this project was to speed up the data processing so that the analysis and decision making become easy. Due to RDBMS limitations to process waste amount of data at once and produce the results at the earliest, Client wanted to move the data to Hadoop ecosystem so that they can over-come from those limitations and focus on business improvement only.
Responsibility 1. Optimising the SQL queries for those data which were not required to move from RDBMS to any other platform.
2. Writing the Hive queries and logic to move the data from RDBMS to Hadoop ecosystem.
3. Writing the hive queries to analyse the required data as per the business requirements.
4. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
5. Writing the sqoop commands and scripts to move the data from RDBMS to Hadoop side.
company - Tata Consultancy Services
description - Project Description
Create recs and migrating static setup of reconciliations from 8.1 version to 9.1 version of the environment Intellimatch.
Responsibility 1. Have worked on extracting business requirements, analyzing and implementing them in developing Recs 2. Worked on migrating static setup of reconciliations from 8.1 version to 9.1 version of the environment Intellimatch.
3. Done the back end work where most of the things were related to writing the sql queries and provide the data for the new recs.
Project Name PSO |
Hadoop | Technical Skills Programming Languages: C, C++, Java, .Net., J2EE, HTML5, CSS, MapReduce Scripting Languages: Javascript, Python Databases: Oracle (PL-SQL), MY-SQL, IBM DB2 Tools:IBM Rational Rose, R, Weka Operating Systems: Windows XP, Vista, UNIX, Windows 7, Red Hat 7Education Details
January 2015 B.E Pimpri Chinchwad, MAHARASHTRA, IN Pimpri Chinchwad College of Engineering
January 2012 Diploma MSBTE Dnyanganaga Polytechnic
S.S.C New English School Takali
Hadoop/Big Data Developer
Hadoop/Big Data Developer - British Telecom
Skill Details
APACHE HADOOP MAPREDUCE- Exprience - 37 months
MapReduce- Exprience - 37 months
MAPREDUCE- Exprience - 37 months
JAVA- Exprience - 32 months
.NET- Exprience - 6 monthsCompany Details
company - British Telecom
description - Project: British Telecom project (UK)
Responsibilities:
⢠Working on HDFS, MapReduce, Hive, Spark, Scala, Sqoop, Kerberos etc. technologies
⢠Implemented various data mining algorithms on Spark like K-means clustering, Random forest, Naïve bayes etc.
⢠A knowledge of installing, configuring, maintaining and securing Hadoop.
company - DXC technology
description - HPE legacy), Bangalore
⢠Worked on Hadoop + Java programming
⢠Worked on Azure and AWS (EMR) services.
⢠Worked on HDInsight Hadoop cluster..
⢠Design, develop, document and architect Hadoop applications
⢠Develop MapReduce coding that works seamlessly on Hadoop clusters.
⢠Analyzing and processing the large data sets on HDFS.
⢠An analytical bent of mind and ability to learn-unlearn-relearn surely comes in handy. |
Hadoop | Technical Skill Set Big Data Ecosystems: Hadoop, HDFS, HBase, Map Reduce, Sqoop, Hive, Pig, Spark-Core, Flume. Other Language: Scala, Core-Java, SQL, PLSQL, Sell Scripting ETL Tools: Informatica Power Center8.x/9.6, Talend 5.6 Tools: Eclipse, Intellij Idea. Platforms: Windows Family, Linux /UNIX, Cloudera. Databases: MySQL, Oracle.10/11gEducation Details
M.C.A Pune, MAHARASHTRA, IN Pune University
Hodoop Developer
Hodoop Developer - PRGX India Private Limited Pune
Skill Details
Company Details
company - PRGX India Private Limited Pune
description - Team Size: 10+
Environment: Hive, Spark, Sqoop, Scala and Flume.
Project Description:
The bank wanted to help its customers to avail different products of the bank through analyzing their expenditure behavior. The customers spending ranges from online shopping, medical expenses in hospitals, cash transactions, and debit card usage etc. the behavior allows the bank to create an analytical report and based on which the bank used to display the product offers on the customer portal which was built using java. The portal allows the customers to login and see their transactions which they make on a day to day basis .the analytics also help the customers plan their budgets through the budget watch and my financial forecast applications embedded into the portal. The portal used hadoop framework to analyes the data as per the rules and regulations placed by the regulators from the respective countries. The offers and the interest rates also complied with the regulations and all these processing was done using the hadoop framework as big data analytics system.
Role & Responsibilities:
â Import data from legacy system to hadoop using Sqoop, flume.
â Implement the business logic to analyses the data
â Per-process data using spark.
â Create hive script and loading data into hive.
â Sourcing various attributes to the data processing logic to retrieve the correct results.
Project 2
company - PRGX India Private Limited Pune
description -
company - PRGX India Private Limited Pune
description - Team Size: 11+
Environment: Hadoop, HDFS, Hive, Sqoop, MySQL, Map Reduce
Project Description:-
The Purpose of this project is to store terabytes of information from the web application and extract meaningful information out of it.the solution was based on the open source s/w hadoop. The data will be stored in hadoop file system and processed using Map/Reduce jobs. Which in trun includes getting the raw html data from the micro websites, process the html to obtain product and user information, extract various reports out of the vistor tracking information and export the information for further processing
Role & Responsibilities:
â Move all crawl data flat files generated from various micro sites to HDFS for further processing.
â Sqoop implementation for interaction with database
â Write Map Reduce scripts to process the data file.
â Create hive tables to store the processed data in tabular formats.
â Reports creation from hive data.
Project 3
company - PRGX India Private Limited Pune
description - Team Size: 15+
Environment: Informatica 9.5, Oracle11g, UNIX
Project Description:
Pfizer Inc. is an American global pharmaceutical corporation headquartered in New York City. The main objective of the project is to build a Development Data Repository for Pfizer Inc. Because all the downstream application are like Etrack, TSP database, RTS, SADMS, GFS, GDO having their own sql request on the OLTP system directly due to which the performance of OLTP system goes slows down. For this we have created a Development Data Repository to replace the entire sql request directly on the OLTP system. DDR process extracts all clinical, pre-clinical, study, product, subject, sites related information from the upstream applications like EPECS, CDSS, RCM, PRC, E-CLINICAL, EDH and after applying some business logic put it into DDR core tables. From these snapshot and dimensional layer are created which are used for reporting application.
Role & Responsibilities:
â To understand & analyze the requirement documents and resolve the queries.
â To design Informatica mappings by using various basic transformations like Filter, Router, Source qualifier, Lookup etc and advance transformations like Aggregators, Joiner, Sorters and so on.
â Perform cross Unit and Integration testing for mappings developed within the team. Reporting bugs and bug fixing.
â Create workflow/batches and set the session dependencies.
â Implemented Change Data Capture using mapping parameters, SCD and SK generation.
â Developed Mapplet, reusable transformations to populate the data into data warehouse.
â Created Sessions & Worklets using workflow Manager to load the data into the Target Database.
â Involved in Unit Case Testing (UTC)
â Performing Unit Testing and UAT for SCD Type1/Type2, fact load and CDC implementation.
Personal Scan
Address: Jijayi Heights, Flat no 118, Narhe, (Police chowki) Pune- 411041 |
Hadoop | Education Details
Hadoop Developer
Hadoop Developer - INFOSYS
Skill Details
Company Details
company - INFOSYS
description - Project Description: The banking information had stored the data in different data ware house systems for each department but it becomes difficult for the organization to manage the data and to perform some analytics on the past data, so it is combined them into a single global repository in Hadoop for analysis.
Responsibilities:
⢠Analyze the banking rates data set.
⢠Create specification document.
⢠Provide effort estimation.
⢠Develop SPARK Scala, SPARK SQL Programs using Eclipse IDE on Windows/Linux environment.
⢠Create KPI's test scenarios, test cases, test result document.
⢠Test the Scala programs in Linux Spark Standalone mode.
⢠setup multi cluster on AWS, deploy the Spark Scala programs
⢠Provided solution using Hadoop ecosystem - HDFS, MapReduce, Pig, Hive, HBase, and Zookeeper.
⢠Provided solution using large scale server-side systems with distributed processing algorithms.
⢠Created reports for the BI team using Sqoop to export data into HDFS and Hive.
⢠Provided solution in supporting and assisting in troubleshooting and optimization of MapReduce jobs and
Pig Latin scripts.
⢠Deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect
system performance.
⢠Worked on Importing and exporting data from different databases like Oracle, Teradata into HDFS and Hive
using Sqoop, TPT and Connect Direct.
⢠Import and export the data from RDBMS to HDFS/HBASE
⢠Wrote script and placed it in client side so that the data moved to HDFS will be stored in temporary file and then it will start loading it in hive tables.
⢠Developed the Sqoop scripts in order to make the interaction between Pig and MySQL Database.
⢠Involved in developing the Hive Reports, Partitions of Hive tables.
⢠Created and maintained technical documentation for launching HADOOP Clusters and for executing HIVE
queries and PIG scripts.
⢠Involved in running Hadoop jobs for processing millions of records of text data
Environment: Java, Hadoop, HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Oozie, HBase, Spark, Scala,
Linux, NoSQL, Storm, Tomcat, Putty, SVN, GitHub, IBM WebSphere v8.5.
Project #1: TELECOMMUNICATIONS
Hadoop Developer
Description To identify customers who are likely to churn and 360-degree view of the customer is created from different heterogeneous data sources. The data is brought into data lake (HDFS) from different sources and analyzed using different Hadoop tools like pig and hive.
Responsibilities:
⢠Installed and Configured Apache Hadoop tools like Hive, Pig, HBase and Sqoop for application development and unit testing.
⢠Wrote MapReduce jobs to discover trends in data usage by users.
⢠Involved in database connection using SQOOP.
⢠Involved in creating Hive tables, loading data and writing hive queries Using the HiveQL.
⢠Involved in partitioning and joining Hive tables for Hive query optimization.
⢠Experienced in SQL DB Migration to HDFS.
⢠Used NoSQL(HBase) for faster performance, which maintains the data in the De-Normalized way for OLTP.
⢠The data is collected from distributed sources into Avro models. Applied transformations and standardizations and loaded into HBase for further data processing.
⢠Experienced in defining job flows.
⢠Used Oozie to orchestrate the workflow.
⢠Implemented Fair schedulers on the Job tracker to share the resources of the Cluster for the Map Reduce
jobs given by the users.
⢠Exported the analyzed data to the relational databases using HIVE for visualization and to generate reports for the BI team.
Environment: Hadoop, Hive, Linux, MapReduce, HDFS, Hive, Python, Pig, Sqoop, Cloudera, Shell Scripting,
Java (JDK 1.6), Java 6, Oracle 10g, PL/SQL, SQL*PLUS |
Hadoop | Skill Set: Hadoop, Map Reduce, HDFS, Hive, Sqoop, java. Duration: 2016 to 2017. Role: Hadoop Developer Rplus offers an quick, simple and powerful cloud based Solution, Demand Sense to accurately predict demand for your product in all your markets which Combines Enterprise and External Data to predict demand more accurately through Uses Social Conversation and Sentiments to derive demand and Identifies significant drivers of sale out of hordes of factors that Selects the best suited model out of multiple forecasting models for each product. Responsibilities: ⢠Involved in deploying the product for customers, gathering requirements and algorithm optimization at backend of the product. ⢠Load and transform Large Datasets of structured semi structured. ⢠Responsible to manage data coming from different sources and application ⢠Supported Map Reduce Programs those are running on the cluster ⢠Involved in creating Hive tables, loading with data and writing hive queries which will run internally in map reduce way.Education Details
Hadoop Developer
Hadoop Developer - Braindatawire
Skill Details
APACHE HADOOP HDFS- Exprience - 49 months
APACHE HADOOP SQOOP- Exprience - 49 months
Hadoop- Exprience - 49 months
HADOOP- Exprience - 49 months
HADOOP DISTRIBUTED FILE SYSTEM- Exprience - 49 monthsCompany Details
company - Braindatawire
description - Technical Skills:
⢠Programming: Core Java, Map Reduce, Scala
⢠Hadoop Tools: HDFS, Spark, Map Reduce, Sqoop, Hive, Hbase
⢠Database: MySQL, Oracle
⢠Scripting: Shell Scripting
⢠IDE: Eclipse
⢠Operating Systems: Linux (CentOS), Windows
⢠Source Control: Git (Github) |
Hadoop | ⢠Operating systems:-Linux- Ubuntu, Windows 2007/08 ⢠Other tools:- Tableau, SVN, Beyond Compare.Education Details
January 2016 Bachelors of Engineering Engineering Gujarat Technological University
Systems Engineer/Hadoop Developer
Systems Engineer/Hadoop Developer - Tata Consultancy Services
Skill Details
Hadoop,Spark,Sqoop,Hive,Flume,Pig- Exprience - 24 monthsCompany Details
company - Tata Consultancy Services
description - Roles and responsibility:
Working for a American pharmaceutical company (one of the world's premier
biopharmaceutical) who develops and produces medicines and vaccines for a wide range of medical
disciplines, including immunology, oncology, cardiology, endocrinology, and neurology. To handle large
amount of United Healthcare data big data analytics is used. Data from all possible data sources like records of all Patients(Old and New), records of medicines, Treatment Pathways & Patient Journey for
Health Outcomes, Patient Finder (or Rare Disease Patient Finder), etc being gathered, stored and processed at one place.
⢠Worked on cluster with specs as:
o Cluster Architecture: Fully
Distributed Package Used:
CDH3
o Cluster Capacity: 20 TB
o No. of Nodes: 10 Data Nodes + 3 Masters + NFS Backup For NN
⢠Developed proof of concepts for enterprise adoption of Hadoop.
⢠Used SparkAPI over Cloudera Hadoop YARN to perform analytics on the Healthcare data in Cloudera
distribution.
⢠Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and trouble-shooting, manage and review data backups, and reviewing Hadoop log files.
⢠Imported & exported large data sets of data into HDFS and vice-versa using sqoop.
⢠Involved developing the Pig scripts and Hive Reports
⢠Worked on Hive partition and bucketing concepts and created hive external and Internal tables with Hive
partition.Monitoring Hadoop scripts which take the input from HDFS and load the data into Hive.
⢠Developed Spark scripts by using Scala shell commands as per the requirement and worked with both
Data frames/SQL/Data sets and RDD/MapReduce in Spark. Optimizing of existing algorithms in Hadoop
using SparkContext, Spark-SQL, Data Frames and RDD's.
⢠Collaborated with infrastructure, network, database, application and BI to ensure data, quality and availability.
⢠Developed reports using TABLEAU and exported data to HDFS and hive using Sqoop.
⢠Used ORC & Parquet file formats for serialization of data, and Snappy for the compression of the data.
Achievements
⢠Appreciation for showing articulate leadership qualities in doing work with the team.
⢠Completed the internal certification of TCS Certified Hadoop Developer.
Ongoing Learning
⢠Preparing and scheduled the Cloudera Certified Spark Developer CCA 175. |
Hadoop | Areas of expertise ⢠Big Data Ecosystems: Hadoop-HDFS, MapReduce, Hive, Pig, Sqoop, HBase Oozie, Spark, Pyspark, HUE and having knowledge on cassandra ⢠Programming Languages: Python, Core Java and have an idea on Scala ⢠Databases: Oracle 10g, MySQL, Sqlserver NoSQL - HBase, Cassandra ⢠Tools: Eclipse, Toad, FTP, Tectia, Putty, Autosys, Anaconda, Jupyter notebool and Devops - RTC, RLM. ⢠Scripting Languages: JSP ⢠Platforms: Windows, UnixEducation Details
M.Tech (IT-DBS) B.Tech (CSE) SRM University
Software Engineer
Software Engineer - Larsen and Toubro
Skill Details
Company Details
company - Larsen and Toubro
description - Worked as a Software Engineer in Technosoft Corporation, Chennai from Aug 2015 to sep 2016.
company - Current Project
description - Duration: September 2016 to Till date
Vendor: Citi bank
Description:
Citibank's (Citi) Anti-Money Laundering (AML) Transaction Monitoring (TM) program is a future state solution and a rules-based system for transaction monitoring of ICG-Markets business.
Roles and Responesbilities:
⢠Building and providing domain knowledge for Anti Money Laundering among team members.
⢠The layered architecture has Data Warehouse and Workspace layers which are used by Business Analysts.
⢠Actively involved in designing of star-schema model involving various Dimensions and Fact tables.
⢠Designed SCD2 for maintaining history of the DIM data.
⢠Developing Hive Queries for mapping data between different layers of architecture, and it's usage in Oozie Workflows.
⢠Integration with Data Quality and Reconciliation Module.
⢠Regression and Integration testing of solution for any issues in integration with other modules and effectively testing the data flow from layer-to-layer.
⢠Transaction monitoring system development to generate Alerts for the suspicious and fraudulent transactions based on requirements provide by BAs.
⢠Developing spark Jobs for various business rules.
⢠Learning "Machine Learning", which will be used further in the project for developing an effective model for Fraud detection for Anti Money Laundering system.
⢠Scheduling Jobs using Autosys tool.
⢠Deployment and Code Management using RTC and RLM(Release Lifecycle Management)
Hadoop Developer
# Current Project: PRTS - RAN
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Sqoop, HBase, Tez, Tableau, Sqlserver, Teradata
Cluster Size: 96 Node Cluster.
Distribution: Horton works - HDP2.3
company - Alcatel lucent
description - 1X) and Ruckus Wireless
Description:
The scope of this project is to maintain and store the operational and parameters data collected from the multiple vendors networks by the mediation team into the OMS data store and make it available for RF engineers to boost the network performance.
Responsibilities:
⢠Working with Hadoop Distributed File System.
⢠Involved in importing data from MySQL to HDFS using SQOOP.
⢠Involved in creating Hive tables, loading with data and writing hive queries which will run on top of Tez execution Engine.
⢠Involved in Preparing Test cases Document.
⢠Involved in Integrating Hive and HBase to store the operational data.
⢠Monitoring the Jobs through Oozie.
company - Current Project
description - Anti - Money laundering
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Oozie, Spark, Unix, Autosys, Python, RTC, RLM, ETL Framwe work
Cluster Size: 56 Node Cluster.
Distribution: Cloudera 5.9.14 |
Hadoop | Technical Skill Set: Programming Languages Apache Hadoop, Python, shell scripting, SQL Technologies Hive, Pig, Sqoop, Flume, Oozie, Impala, hdfs Tools Dataiku, Unravel, Cloudera, Putty, HUE, Cloudera Manager, Eclipse, Resource Manager Initial Learning Program: Tata Consultancy Services: June 2015 to August 2015 Description: This is a learning program conducted by TCS for the newly joined employees, to accomplish them to learn the working standard of the organization. During this period employee are groomed with various technical as well as ethical aspects. Education Details
B.E. Electronics & Communication Indore, Madhya Pradesh Medi-caps Institute of Technology & Management
Hadoop developer
hadoop,hive,sqoop,flume,pig,mapreduce,python,impala,spark,scala,sql,unix.
Skill Details
APACHE HADOOP SQOOP- Exprience - 31 months
Hadoop- Exprience - 31 months
HADOOP- Exprience - 31 months
Hive- Exprience - 31 months
SQOOP- Exprience - 31 months
python- Exprience - Less than 1 year months
hdfs- Exprience - Less than 1 year months
unix- Exprience - Less than 1 year months
impala- Exprience - Less than 1 year months
pig- Exprience - Less than 1 year months
unravel- Exprience - Less than 1 year months
mapreduce- Exprience - Less than 1 year months
dataiku- Exprience - Less than 1 year monthsCompany Details
company - Tata Consultancy Services
description - Project Description
Data warehouse division has multiple products for injecting, storing, analysing and presenting data. The Data Lake program is started to provide multi-talent, secure data hub to store application's data on Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The object of the project is to provide necessary engineering support to analytics and application teams so that they can focus on the business logic development. In this project, the major task is to set up the Hadoop cluster and govern all the activities which are required for the smooth functioning of various Hadoop ecosystems. As the day and day data increasing so to provide stability to the ecosystem and smooth working of it, Developing and automating the various requirement specific utilities.
Responsibility 1. Developed proactive Health Check utility for Data Lake. The utility proactively checks the smooth functioning of all Hadoop components on the cluster and sends the result to email in HTML format. The utility is being used for daily Health Checks as well as after upgrades.
2. Getting the data in different formats and processing the data in Hadoop ecosystem after filtering the data using the appropriate techniques.
3. Developed data pipeline utility to ingest data from RDBMS database to Hive external tables using Sqoop commands. The utility also offers the data quality check like row count validation.
4. Developed and automated various cluster health check, usage, capacity related reports using Unix shell scripting.
5. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
6. Creating flume agents to process the data to Hadoop ecosystem side.
7. Performed benchmark testing on the Hive Queries and impala queries.
8. Involved in setting up the cluster and its components like edge node and HA implementation of the services: Hive Server2, Impala, and HDFS.
9. Filtering the required data from available data using different technologies like pig, regex Serde etc.
10. Dataiku benchmark testing on top of impala and hive in compare to Greenplum database.
11. Moving the data from Greenplum database to Hadoop side with help of Sqoop pipeline, process the data to Hadoop side and storing the data into hive tables to do the performance testing.
12. Dealing with the Hadoop ecosystem related issues in order to provide stability to WM Hadoop ecosystem.
13. Rescheduling of job from autosys job hosting to TWS job hosting for better performance.
Declaration:
I hereby declare that the above mentioned information is authentic to the best of my knowledge
company - Tata Consultancy Services
description - Clients: 1. Barclays 2. Union bank of California (UBC) 3. Morgan Stanley (MS)
KEY PROJECTS HANDLED
Project Name ABSA- Reconciliations, UBC and WMDATALAKE COE
company - Tata Consultancy Services
description - Project Description
Migration of data from RDBMS database to Hive (Hadoop ecosystem) . Hadoop platform ability with strong data governance, lineage, auditing and monitoring capabilities. The objective of this project was to speed up the data processing so that the analysis and decision making become easy. Due to RDBMS limitations to process waste amount of data at once and produce the results at the earliest, Client wanted to move the data to Hadoop ecosystem so that they can over-come from those limitations and focus on business improvement only.
Responsibility 1. Optimising the SQL queries for those data which were not required to move from RDBMS to any other platform.
2. Writing the Hive queries and logic to move the data from RDBMS to Hadoop ecosystem.
3. Writing the hive queries to analyse the required data as per the business requirements.
4. Optimization of hive queries in order to increase the performance and minimize the Hadoop resource utilizations.
5. Writing the sqoop commands and scripts to move the data from RDBMS to Hadoop side.
company - Tata Consultancy Services
description - Project Description
Create recs and migrating static setup of reconciliations from 8.1 version to 9.1 version of the environment Intellimatch.
Responsibility 1. Have worked on extracting business requirements, analyzing and implementing them in developing Recs 2. Worked on migrating static setup of reconciliations from 8.1 version to 9.1 version of the environment Intellimatch.
3. Done the back end work where most of the things were related to writing the sql queries and provide the data for the new recs.
Project Name PSO |
Hadoop | Technical Skills Programming Languages: C, C++, Java, .Net., J2EE, HTML5, CSS, MapReduce Scripting Languages: Javascript, Python Databases: Oracle (PL-SQL), MY-SQL, IBM DB2 Tools:IBM Rational Rose, R, Weka Operating Systems: Windows XP, Vista, UNIX, Windows 7, Red Hat 7Education Details
January 2015 B.E Pimpri Chinchwad, MAHARASHTRA, IN Pimpri Chinchwad College of Engineering
January 2012 Diploma MSBTE Dnyanganaga Polytechnic
S.S.C New English School Takali
Hadoop/Big Data Developer
Hadoop/Big Data Developer - British Telecom
Skill Details
APACHE HADOOP MAPREDUCE- Exprience - 37 months
MapReduce- Exprience - 37 months
MAPREDUCE- Exprience - 37 months
JAVA- Exprience - 32 months
.NET- Exprience - 6 monthsCompany Details
company - British Telecom
description - Project: British Telecom project (UK)
Responsibilities:
⢠Working on HDFS, MapReduce, Hive, Spark, Scala, Sqoop, Kerberos etc. technologies
⢠Implemented various data mining algorithms on Spark like K-means clustering, Random forest, Naïve bayes etc.
⢠A knowledge of installing, configuring, maintaining and securing Hadoop.
company - DXC Technology
description - (HPE legacy), Bangalore
• Worked on Hadoop + Java programming
• Worked on Azure and AWS (EMR) services.
• Worked on HDInsight Hadoop cluster.
• Design, develop, document and architect Hadoop applications
• Develop MapReduce code that works seamlessly on Hadoop clusters.
• Analyzing and processing large data sets on HDFS.
• An analytical bent of mind and ability to learn-unlearn-relearn surely comes in handy. |
Hadoop | Technical Skill Set Big Data Ecosystems: Hadoop, HDFS, HBase, Map Reduce, Sqoop, Hive, Pig, Spark-Core, Flume. Other Languages: Scala, Core Java, SQL, PLSQL, Shell Scripting. ETL Tools: Informatica Power Center 8.x/9.6, Talend 5.6. Tools: Eclipse, IntelliJ IDEA. Platforms: Windows Family, Linux/UNIX, Cloudera. Databases: MySQL, Oracle 10g/11g. Education Details
M.C.A Pune, MAHARASHTRA, IN Pune University
Hadoop Developer
Hadoop Developer - PRGX India Private Limited Pune
Skill Details
Company Details
company - PRGX India Private Limited Pune
description - Team Size: 10+
Environment: Hive, Spark, Sqoop, Scala and Flume.
Project Description:
The bank wanted to help its customers avail different products of the bank by analyzing their expenditure behavior. The customers' spending ranges from online shopping, medical expenses in hospitals, cash transactions, debit card usage etc. This behavior allows the bank to create an analytical report, based on which the bank displays product offers on the customer portal, which was built using Java. The portal allows customers to log in and see the transactions they make on a day-to-day basis. The analytics also help customers plan their budgets through the budget watch and financial forecast applications embedded in the portal. The portal used the Hadoop framework to analyze the data as per the rules and regulations placed by the regulators of the respective countries. The offers and the interest rates also complied with the regulations, and all of this processing was done using the Hadoop framework as the big data analytics system.
Role & Responsibilities:
➢ Import data from the legacy system to Hadoop using Sqoop and Flume.
➢ Implement the business logic to analyze the data.
➢ Pre-process data using Spark (a minimal sketch follows below).
➢ Create Hive scripts and load data into Hive.
➢ Source various attributes into the data processing logic to retrieve the correct results.
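The Spark pre-processing and Hive-loading steps above can be sketched as follows; the table and column names are hypothetical and only illustrate the shape of the pipeline:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("card-spend-preprocess")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical raw feed landed by Sqoop/Flume; column names are illustrative only.
raw = spark.table("staging.card_transactions")

cleaned = (raw
           .dropDuplicates(["txn_id"])
           .filter(F.col("amount") > 0)
           .withColumn("txn_date", F.to_date("txn_ts"))
           .withColumn("category", F.lower(F.trim(F.col("merchant_category")))))

# Monthly spend per customer per category feeds the offer engine on the portal.
spend_profile = (cleaned
                 .groupBy("customer_id", "category",
                          F.date_format("txn_date", "yyyy-MM").alias("month"))
                 .agg(F.sum("amount").alias("total_spend"),
                      F.count("*").alias("txn_count")))

spend_profile.write.mode("overwrite").saveAsTable("analytics.customer_spend_profile")
```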
Project 2
company - PRGX India Private Limited Pune
description -
company - PRGX India Private Limited Pune
description - Team Size: 11+
Environment: Hadoop, HDFS, Hive, Sqoop, MySQL, Map Reduce
Project Description:-
The purpose of this project is to store terabytes of information from the web application and extract meaningful information out of it. The solution was based on the open-source software Hadoop. The data is stored in the Hadoop file system and processed using Map/Reduce jobs, which in turn includes getting the raw HTML data from the micro websites, processing the HTML to obtain product and user information, extracting various reports out of the visitor tracking information, and exporting the information for further processing.
Role & Responsibilities:
➢ Move all crawl data flat files generated from various micro sites to HDFS for further processing.
➢ Sqoop implementation for interaction with the database.
➢ Write Map Reduce scripts to process the data files (a streaming-style sketch follows below).
➢ Create Hive tables to store the processed data in tabular formats.
➢ Report creation from Hive data.
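The MapReduce step above can be illustrated with a small Hadoop Streaming job in Python that counts page visits per URL from crawl logs; the input format and the invocation shown in the comments are assumptions, not the project's actual job:

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming job: count page visits per URL from crawl logs.
# Hypothetical input: tab-separated lines of "<timestamp>\t<url>\t<user_id>".
# Illustrative invocation:
#   hadoop jar hadoop-streaming.jar \
#     -input /data/crawl -output /data/visits \
#     -mapper "visits.py map" -reducer "visits.py reduce" -file visits.py
import sys

def mapper():
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if len(parts) >= 2:
            print(f"{parts[1]}\t1")            # emit (url, 1)

def reducer():
    current_url, count = None, 0
    for line in sys.stdin:                      # input arrives sorted by key
        url, value = line.rstrip("\n").split("\t")
        if url != current_url:
            if current_url is not None:
                print(f"{current_url}\t{count}")
            current_url, count = url, 0
        count += int(value)
    if current_url is not None:
        print(f"{current_url}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```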
Project 3
company - PRGX India Private Limited Pune
description - Team Size: 15+
Environment: Informatica 9.5, Oracle11g, UNIX
Project Description:
Pfizer Inc. is an American global pharmaceutical corporation headquartered in New York City. The main objective of the project is to build a Development Data Repository (DDR) for Pfizer Inc., because downstream applications such as Etrack, the TSP database, RTS, SADMS, GFS and GDO run their own SQL requests directly on the OLTP system, due to which the performance of the OLTP system slows down. For this we created a Development Data Repository to replace all the SQL requests made directly on the OLTP system. The DDR process extracts all clinical, pre-clinical, study, product, subject and site related information from upstream applications such as EPECS, CDSS, RCM, PRC, E-CLINICAL and EDH and, after applying some business logic, puts it into the DDR core tables. From these, snapshot and dimensional layers are created, which are used by the reporting application.
Role & Responsibilities:
➢ To understand & analyze the requirement documents and resolve the queries.
➢ To design Informatica mappings using various basic transformations like Filter, Router, Source Qualifier, Lookup etc. and advanced transformations like Aggregator, Joiner, Sorter and so on.
➢ Perform cross-unit and integration testing for mappings developed within the team; reporting bugs and bug fixing.
➢ Create workflows/batches and set the session dependencies.
➢ Implemented Change Data Capture using mapping parameters, SCD and SK generation (an SCD Type 2 sketch follows below).
➢ Developed Mapplets and reusable transformations to populate the data into the data warehouse.
➢ Created Sessions & Worklets using Workflow Manager to load the data into the target database.
➢ Involved in Unit Case Testing (UTC)
➢ Performing Unit Testing and UAT for SCD Type 1/Type 2, fact load and CDC implementation.
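The SCD/CDC work above was built as Informatica mappings, which have no code form to quote. Purely as an illustration of SCD Type 2 versioning logic, the same idea can be re-expressed in Spark SQL; every table and column below (dim_site, stg_site, etc.) is hypothetical:

```python
from pyspark.sql import SparkSession

# Illustration only: the project's SCD logic lived in Informatica mappings; this
# sketch re-expresses Type 2 versioning in Spark SQL with invented tables.
spark = (SparkSession.builder.appName("scd2-illustration")
         .enableHiveSupport().getOrCreate())

# dim_site: current dimension; stg_site: today's extract (both hypothetical).
spark.sql("""
    CREATE OR REPLACE TEMPORARY VIEW dim_site_rebuilt AS
    -- keep unchanged history, expire current rows whose attributes changed
    SELECT d.site_sk, d.site_id, d.site_name, d.country, d.valid_from,
           CASE WHEN s.site_id IS NULL THEN d.valid_to   ELSE current_date() END AS valid_to,
           CASE WHEN s.site_id IS NULL THEN d.is_current ELSE false          END AS is_current
    FROM dim_site d
    LEFT JOIN stg_site s
      ON d.site_id = s.site_id AND d.is_current = true
     AND (d.site_name <> s.site_name OR d.country <> s.country)

    UNION ALL

    -- insert a fresh current version for new or changed business keys
    SELECT monotonically_increasing_id(), s.site_id, s.site_name, s.country,
           current_date(), DATE '9999-12-31', true
    FROM stg_site s
    LEFT JOIN dim_site d ON s.site_id = d.site_id AND d.is_current = true
    WHERE d.site_id IS NULL OR d.site_name <> s.site_name OR d.country <> s.country
""")

# Materialise into a new table, then swap it in for dim_site (swap not shown).
spark.sql("DROP TABLE IF EXISTS dim_site_new")
spark.sql("CREATE TABLE dim_site_new AS SELECT * FROM dim_site_rebuilt")
```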
Personal Scan
Address: Jijayi Heights, Flat no 118, Narhe, (Police chowki) Pune- 411041 |
Hadoop | Education Details
Hadoop Developer
Hadoop Developer - INFOSYS
Skill Details
Company Details
company - INFOSYS
description - Project Description: The banking organization had stored its data in different data warehouse systems for each department, but it became difficult for the organization to manage the data and to perform analytics on past data, so the data was combined into a single global repository in Hadoop for analysis.
Responsibilities:
⢠Analyze the banking rates data set.
⢠Create specification document.
⢠Provide effort estimation.
⢠Develop SPARK Scala, SPARK SQL Programs using Eclipse IDE on Windows/Linux environment.
⢠Create KPI's test scenarios, test cases, test result document.
⢠Test the Scala programs in Linux Spark Standalone mode.
⢠setup multi cluster on AWS, deploy the Spark Scala programs
⢠Provided solution using Hadoop ecosystem - HDFS, MapReduce, Pig, Hive, HBase, and Zookeeper.
⢠Provided solution using large scale server-side systems with distributed processing algorithms.
⢠Created reports for the BI team using Sqoop to export data into HDFS and Hive.
⢠Provided solution in supporting and assisting in troubleshooting and optimization of MapReduce jobs and
Pig Latin scripts.
⢠Deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect
system performance.
⢠Worked on Importing and exporting data from different databases like Oracle, Teradata into HDFS and Hive
using Sqoop, TPT and Connect Direct.
⢠Import and export the data from RDBMS to HDFS/HBASE
⢠Wrote script and placed it in client side so that the data moved to HDFS will be stored in temporary file and then it will start loading it in hive tables.
⢠Developed the Sqoop scripts in order to make the interaction between Pig and MySQL Database.
⢠Involved in developing the Hive Reports, Partitions of Hive tables.
⢠Created and maintained technical documentation for launching HADOOP Clusters and for executing HIVE
queries and PIG scripts.
⢠Involved in running Hadoop jobs for processing millions of records of text data
Environment: Java, Hadoop, HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Oozie, HBase, Spark, Scala,
Linux, NoSQL, Storm, Tomcat, Putty, SVN, GitHub, IBM WebSphere v8.5.
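The Spark SQL development mentioned above might look roughly like the following sketch, which computes a simple rate KPI over the consolidated repository; the table and column names are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("banking-rates-kpi")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical consolidated repository table; column names are illustrative.
rates = spark.table("global_repo.banking_rates")

# KPI: average rate and latest observation per product per quarter.
kpi = (rates
       .groupBy("product_code",
                F.year("rate_date").alias("yr"),
                F.quarter("rate_date").alias("qtr"))
       .agg(F.avg("rate_pct").alias("avg_rate"),
            F.max("rate_date").alias("latest_observation")))

kpi.write.mode("overwrite").saveAsTable("analytics.rate_kpis")
```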
Project #1: TELECOMMUNICATIONS
Hadoop Developer
Description: To identify customers who are likely to churn, a 360-degree view of the customer is created from different heterogeneous data sources. The data is brought into the data lake (HDFS) from different sources and analyzed using Hadoop tools such as Pig and Hive.
Responsibilities:
⢠Installed and Configured Apache Hadoop tools like Hive, Pig, HBase and Sqoop for application development and unit testing.
⢠Wrote MapReduce jobs to discover trends in data usage by users.
⢠Involved in database connection using SQOOP.
⢠Involved in creating Hive tables, loading data and writing hive queries Using the HiveQL.
⢠Involved in partitioning and joining Hive tables for Hive query optimization.
⢠Experienced in SQL DB Migration to HDFS.
⢠Used NoSQL(HBase) for faster performance, which maintains the data in the De-Normalized way for OLTP.
⢠The data is collected from distributed sources into Avro models. Applied transformations and standardizations and loaded into HBase for further data processing.
⢠Experienced in defining job flows.
⢠Used Oozie to orchestrate the workflow.
⢠Implemented Fair schedulers on the Job tracker to share the resources of the Cluster for the Map Reduce
jobs given by the users.
⢠Exported the analyzed data to the relational databases using HIVE for visualization and to generate reports for the BI team.
Environment: Hadoop, Hive, Linux, MapReduce, HDFS, Hive, Python, Pig, Sqoop, Cloudera, Shell Scripting,
Java (JDK 1.6), Java 6, Oracle 10g, PL/SQL, SQL*PLUS |
Hadoop | Skill Set: Hadoop, Map Reduce, HDFS, Hive, Sqoop, Java. Duration: 2016 to 2017. Role: Hadoop Developer. Rplus offers a quick, simple and powerful cloud-based solution, Demand Sense, to accurately predict demand for your product in all your markets. It combines enterprise and external data to predict demand more accurately, uses social conversation and sentiment to derive demand, identifies significant drivers of sale out of hordes of factors, and selects the best-suited model out of multiple forecasting models for each product. Responsibilities: • Involved in deploying the product for customers, gathering requirements and algorithm optimization at the back end of the product. • Load and transform large datasets of structured and semi-structured data. • Responsible for managing data coming from different sources and applications. • Supported Map Reduce programs running on the cluster. • Involved in creating Hive tables, loading them with data and writing Hive queries which run internally in the map reduce way. Education Details
Hadoop Developer
Hadoop Developer - Braindatawire
Skill Details
APACHE HADOOP HDFS - Experience - 49 months
APACHE HADOOP SQOOP - Experience - 49 months
Hadoop - Experience - 49 months
HADOOP - Experience - 49 months
HADOOP DISTRIBUTED FILE SYSTEM - Experience - 49 months
Company Details
company - Braindatawire
description - Technical Skills:
⢠Programming: Core Java, Map Reduce, Scala
⢠Hadoop Tools: HDFS, Spark, Map Reduce, Sqoop, Hive, Hbase
⢠Database: MySQL, Oracle
⢠Scripting: Shell Scripting
⢠IDE: Eclipse
⢠Operating Systems: Linux (CentOS), Windows
⢠Source Control: Git (Github) |
Hadoop | ⢠Operating systems:-Linux- Ubuntu, Windows 2007/08 ⢠Other tools:- Tableau, SVN, Beyond Compare.Education Details
January 2016 Bachelors of Engineering Engineering Gujarat Technological University
Systems Engineer/Hadoop Developer
Systems Engineer/Hadoop Developer - Tata Consultancy Services
Skill Details
Hadoop, Spark, Sqoop, Hive, Flume, Pig - Experience - 24 months
Company Details
company - Tata Consultancy Services
description - Roles and responsibilities:
Working for an American pharmaceutical company (one of the world's premier biopharmaceutical companies) which develops and produces medicines and vaccines for a wide range of medical disciplines, including immunology, oncology, cardiology, endocrinology, and neurology. To handle the large amount of United Healthcare data, big data analytics is used. Data from all possible data sources, such as records of all patients (old and new), records of medicines, Treatment Pathways & Patient Journey for Health Outcomes, Patient Finder (or Rare Disease Patient Finder), etc., is gathered, stored and processed in one place.
⢠Worked on cluster with specs as:
o Cluster Architecture: Fully
Distributed Package Used:
CDH3
o Cluster Capacity: 20 TB
o No. of Nodes: 10 Data Nodes + 3 Masters + NFS Backup For NN
⢠Developed proof of concepts for enterprise adoption of Hadoop.
⢠Used SparkAPI over Cloudera Hadoop YARN to perform analytics on the Healthcare data in Cloudera
distribution.
⢠Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and trouble-shooting, manage and review data backups, and reviewing Hadoop log files.
⢠Imported & exported large data sets of data into HDFS and vice-versa using sqoop.
⢠Involved developing the Pig scripts and Hive Reports
⢠Worked on Hive partition and bucketing concepts and created hive external and Internal tables with Hive
partition.Monitoring Hadoop scripts which take the input from HDFS and load the data into Hive.
⢠Developed Spark scripts by using Scala shell commands as per the requirement and worked with both
Data frames/SQL/Data sets and RDD/MapReduce in Spark. Optimizing of existing algorithms in Hadoop
using SparkContext, Spark-SQL, Data Frames and RDD's.
⢠Collaborated with infrastructure, network, database, application and BI to ensure data, quality and availability.
⢠Developed reports using TABLEAU and exported data to HDFS and hive using Sqoop.
⢠Used ORC & Parquet file formats for serialization of data, and Snappy for the compression of the data.
Achievements
⢠Appreciation for showing articulate leadership qualities in doing work with the team.
⢠Completed the internal certification of TCS Certified Hadoop Developer.
Ongoing Learning
⢠Preparing and scheduled the Cloudera Certified Spark Developer CCA 175. |
Hadoop | Areas of expertise • Big Data Ecosystems: Hadoop-HDFS, MapReduce, Hive, Pig, Sqoop, HBase, Oozie, Spark, PySpark, HUE, and working knowledge of Cassandra • Programming Languages: Python, Core Java and an idea of Scala • Databases: Oracle 10g, MySQL, SQL Server; NoSQL - HBase, Cassandra • Tools: Eclipse, Toad, FTP, Tectia, Putty, Autosys, Anaconda, Jupyter notebook and DevOps - RTC, RLM. • Scripting Languages: JSP • Platforms: Windows, Unix. Education Details
M.Tech (IT-DBS) B.Tech (CSE) SRM University
Software Engineer
Software Engineer - Larsen and Toubro
Skill Details
Company Details
company - Larsen and Toubro
description - Worked as a Software Engineer in Technosoft Corporation, Chennai from Aug 2015 to sep 2016.
company - Current Project
description - Duration: September 2016 to Till date
Vendor: Citi bank
Description:
Citibank's (Citi) Anti-Money Laundering (AML) Transaction Monitoring (TM) program is a future state solution and a rules-based system for transaction monitoring of ICG-Markets business.
Roles and Responsibilities:
⢠Building and providing domain knowledge for Anti Money Laundering among team members.
⢠The layered architecture has Data Warehouse and Workspace layers which are used by Business Analysts.
⢠Actively involved in designing of star-schema model involving various Dimensions and Fact tables.
⢠Designed SCD2 for maintaining history of the DIM data.
⢠Developing Hive Queries for mapping data between different layers of architecture, and it's usage in Oozie Workflows.
⢠Integration with Data Quality and Reconciliation Module.
⢠Regression and Integration testing of solution for any issues in integration with other modules and effectively testing the data flow from layer-to-layer.
⢠Transaction monitoring system development to generate Alerts for the suspicious and fraudulent transactions based on requirements provide by BAs.
⢠Developing spark Jobs for various business rules.
⢠Learning "Machine Learning", which will be used further in the project for developing an effective model for Fraud detection for Anti Money Laundering system.
⢠Scheduling Jobs using Autosys tool.
⢠Deployment and Code Management using RTC and RLM(Release Lifecycle Management)
Hadoop Developer
# Current Project: PRTS - RAN
Environment: Hadoop 2.x, HDFS, Yarn, Hive, Sqoop, HBase, Tez, Tableau, Sqlserver, Teradata
Cluster Size: 96 Node Cluster.
Distribution: Hortonworks - HDP 2.3
company - Alcatel-Lucent
description - 1X) and Ruckus Wireless
Description:
The scope of this project is to maintain and store the operational and parameter data collected from the multiple vendors' networks by the mediation team into the OMS data store, and make it available for RF engineers to boost network performance.
Responsibilities:
⢠Working with Hadoop Distributed File System.
⢠Involved in importing data from MySQL to HDFS using SQOOP.
⢠Involved in creating Hive tables, loading with data and writing hive queries which will run on top of Tez execution Engine.
⢠Involved in Preparing Test cases Document.
⢠Involved in Integrating Hive and HBase to store the operational data.
⢠Monitoring the Jobs through Oozie.
company - Current Project
description - Anti-Money Laundering
Environment: Hadoop 2.x, HDFS, YARN, Hive, Oozie, Spark, Unix, Autosys, Python, RTC, RLM, ETL Framework
Cluster Size: 56 Node Cluster.
Distribution: Cloudera 5.9.14 |
Hadoop | Technical Skill Set: Programming Languages: Apache Hadoop, Python, shell scripting, SQL; Technologies: Hive, Pig, Sqoop, Flume, Oozie, Impala, HDFS; Tools: Dataiku, Unravel, Cloudera, Putty, HUE, Cloudera Manager, Eclipse, Resource Manager. Initial Learning Program: Tata Consultancy Services: June 2015 to August 2015. Description: This is a learning program conducted by TCS for newly joined employees, to help them learn the working standards of the organization. During this period employees are groomed in various technical as well as ethical aspects. Education Details
B.E. Electronics & Communication Indore, Madhya Pradesh Medi-caps Institute of Technology & Management
Hadoop developer
Hadoop, Hive, Sqoop, Flume, Pig, MapReduce, Python, Impala, Spark, Scala, SQL, Unix.
Skill Details
APACHE HADOOP SQOOP - Experience - 31 months
Hadoop - Experience - 31 months
HADOOP - Experience - 31 months
Hive - Experience - 31 months
SQOOP - Experience - 31 months
python - Experience - Less than 1 year
hdfs - Experience - Less than 1 year
unix - Experience - Less than 1 year
impala - Experience - Less than 1 year
pig - Experience - Less than 1 year
unravel - Experience - Less than 1 year
mapreduce - Experience - Less than 1 year
dataiku - Experience - Less than 1 year
Company Details
company - Tata Consultancy Services
description - Project Description
The data warehouse division has multiple products for ingesting, storing, analysing and presenting data. The Data Lake program was started to provide a multi-tenant, secure data hub to store applications' data on the Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The objective of the project is to provide the necessary engineering support to analytics and application teams so that they can focus on business logic development. In this project, the major task is to set up the Hadoop cluster and govern all the activities required for the smooth functioning of the various Hadoop ecosystems. As the data grows day by day, various requirement-specific utilities are developed and automated to provide stability to the ecosystem and keep it working smoothly.
Responsibility 1. Developed a proactive health check utility for the Data Lake. The utility proactively checks the smooth functioning of all Hadoop components on the cluster and sends the result by email in HTML format. The utility is used for daily health checks as well as after upgrades.
2. Getting data in different formats and processing it in the Hadoop ecosystem after filtering it using the appropriate techniques.
3. Developed a data pipeline utility to ingest data from an RDBMS database into Hive external tables using Sqoop commands. The utility also offers data quality checks such as row count validation (sketched below).
4. Developed and automated various cluster health check, usage and capacity related reports using Unix shell scripting.
5. Optimization of Hive queries in order to increase performance and minimize Hadoop resource utilization.
6. Creating Flume agents to move data to the Hadoop ecosystem side.
7. Performed benchmark testing on the Hive queries and Impala queries.
8. Involved in setting up the cluster and its components, like the edge node and HA implementation of the services: HiveServer2, Impala, and HDFS.
9. Filtering the required data from the available data using different technologies such as Pig, Regex SerDe, etc.
10. Dataiku benchmark testing on top of Impala and Hive, compared to the Greenplum database.
11. Moving data from the Greenplum database to Hadoop with the help of a Sqoop pipeline, processing it on the Hadoop side and storing it into Hive tables for performance testing.
12. Dealing with Hadoop ecosystem related issues in order to provide stability to the WM Hadoop ecosystem.
13. Rescheduling of jobs from Autosys job hosting to TWS job hosting for better performance.
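Item 3 mentions a row-count validation between the source RDBMS and the ingested Hive table. A minimal sketch of that check in PySpark, with placeholder connection details and table names:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("ingest-rowcount-check")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical source/target pair for one ingested table; all names are placeholders.
JDBC_URL = "jdbc:postgresql://rdbms-host:5432/warehouse"
SOURCE_TABLE = "public.positions_daily"
HIVE_TABLE = "staging.positions_daily"

source_count = (spark.read.format("jdbc")
                .option("url", JDBC_URL)
                .option("dbtable", f"(SELECT COUNT(*) AS n FROM {SOURCE_TABLE}) q")
                .option("user", "etl_user")
                .option("password", "***")
                .load()
                .collect()[0]["n"])

target_count = spark.table(HIVE_TABLE).count()

if source_count != target_count:
    raise RuntimeError(
        f"Row count mismatch for {HIVE_TABLE}: source={source_count}, hive={target_count}")
print(f"{HIVE_TABLE}: row counts match ({target_count})")
```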
Declaration:
I hereby declare that the above mentioned information is authentic to the best of my knowledge
company - Tata Consultancy Services
description - Clients: 1. Barclays 2. Union bank of California (UBC) 3. Morgan Stanley (MS)
KEY PROJECTS HANDLED
Project Name ABSA- Reconciliations, UBC and WMDATALAKE COE
company - Tata Consultancy Services
description - Project Description
Migration of data from an RDBMS database to Hive (Hadoop ecosystem), onto a Hadoop platform with strong data governance, lineage, auditing and monitoring capabilities. The objective of this project was to speed up data processing so that analysis and decision making become easy. Due to RDBMS limitations in processing vast amounts of data at once and producing results quickly, the client wanted to move the data to the Hadoop ecosystem so that they could overcome those limitations and focus on business improvement only.
Responsibility 1. Optimising the SQL queries for data that was not required to move from the RDBMS to any other platform.
2. Writing the Hive queries and logic to move data from the RDBMS to the Hadoop ecosystem.
3. Writing Hive queries to analyse the required data as per the business requirements.
4. Optimization of Hive queries in order to increase performance and minimize Hadoop resource utilization.
5. Writing the Sqoop commands and scripts to move data from the RDBMS to the Hadoop side.
company - Tata Consultancy Services
description - Project Description
Creating recs and migrating the static setup of reconciliations from version 8.1 to version 9.1 of the IntelliMatch environment.
Responsibility 1. Worked on extracting business requirements, analysing and implementing them while developing recs. 2. Worked on migrating the static setup of reconciliations from version 8.1 to version 9.1 of the IntelliMatch environment.
3. Did the back-end work, most of which involved writing SQL queries and providing the data for the new recs.
Project Name PSO |
ETL Developer | Technical Summary • Knowledge of the Informatica PowerCenter (ver. 9.1 and 10) ETL tool: mapping design, usage of multiple transformations, integration of various data sources like SQL Server tables, flat files, etc. into the target data warehouse. • SQL/PLSQL working knowledge on Microsoft SQL Server 2010. • Unix work description: shell scripting, error debugging. • Job scheduling using Autosys; incident management and change requests through ServiceNow, JIRA, Agile Central • Basic knowledge of IntelliMatch (reconciliation tool). Education Details
January 2010 to January 2014 BTech CSE Sangli, Maharashtra Walchand College of Engineering
October 2009 H.S.C Sangli, Maharashtra Willingdon College
August 2007 S.S.C Achievements Sangli, Maharashtra Martin's English School
ETL Developer
IT Analyst
Skill Details
ETL - Experience - 48 months
EXTRACT, TRANSFORM, AND LOAD - Experience - 48 months
INFORMATICA - Experience - 48 months
MS SQL SERVER - Experience - 48 months
RECONCILIATION - Experience - 48 months
Jira - Experience - 36 months
Company Details
company - Tata Consultancy Services
description - Project Details
Client/Project: Barclays UK London/HEXAD
Environment: Informatica (Power Center), SQL Server, UNIX, Autosys, Intellimatch.
Project Description:
The objective is to implement a strategic technical solution to support the governance and monitoring of break standards, including enhancements to audit capabilities. As part of this program, the required remediation of source system data feeds involves consolidation of data into standardized feeds.
These remediated data feeds will be consumed by the ETL layer. The reconciliation tool is designed to source data from an ETL layer. The data from the front and back office systems, together with static data, must therefore be delivered to ETL. Here it will be pre-processed and delivered to the reconciliation tool before the reconciliation process can be performed.
Role and Responsibilities:
⢠Responsible for analyzing, designing and developing ETL strategies and processes,
writing ETL specifications
⢠Requirement gathering
⢠Making functional documents and low level documents
⢠Developing and debugging the Informatica mappings to resolve bugs, and identify the causes of failures
⢠User interaction to identify the issues with the data loaded through the application
⢠Developed mappings using different transformations
company - Tata Consultancy Services
description - Project Details
Client/Project: Barclays UK London/HEXAD
Environment: Informatica (Power Center), SQL Server, UNIX, Autosys, Intellimatch.
Project Description:
The objective is to implement a strategic technical solution to support the governance and monitoring of break standards, including enhancements to audit capabilities. As part of this program, the required remediation of source system data feeds involves consolidation of data into standardized feeds.
These remediated data feeds will be consumed by the ETL layer. The reconciliation tool is designed to source data from an ETL layer. The data from the front and back office systems, together with static data, must therefore be delivered to ETL. Here it will be pre-processed and delivered to the reconciliation tool before the reconciliation process can be performed.
Role and Responsibilities:
⢠Responsible for analyzing, designing and developing ETL strategies and processes,
writing ETL specifications
⢠Requirement gathering
⢠Making functional documents and low level documents
⢠Developing and debugging the Informatica mappings to resolve bugs, and identify the causes of failures
⢠User interaction to identify the issues with the data loaded through the application
⢠Developed mappings using different transformations |
ETL Developer | Technical Proficiencies DB: Oracle 11g Domains: Investment Banking, Advertising, Insurance. Programming Skills: SQL, PLSQL BI Tools: Informatica 9.1 OS: Windows, Unix Professional Development Trainings • Concepts in Data Warehousing, Business Intelligence, ETL. • BI Tools - Informatica 9.x Education Details
BCA Nanded, Maharashtra Nanded University
ETL Developer
ETL Developer - Sun Trust Bank NY
Skill Details
ETL - Experience - 39 months
EXTRACT, TRANSFORM, AND LOAD - Experience - 39 months
INFORMATICA - Experience - 39 months
ORACLE - Experience - 39 months
UNIX - Experience - 39 months
Company Details
company - Sun Trust Bank NY
description - Sun Trust Bank, NY JAN 2018 to present
Client: Sun Trust Bank NY
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Sun Trust Bank is a US-based multinational financial services holding company, headquartered in NY, that operates the bank in New York along with other financial services investments. The company is organized as a stock corporation with four divisions: investment banking, private banking, retail banking and a shared services group that provides financial services and support to the other divisions.
The objective of the first module was to create a DR system for the bank with a central point of communication and storage for Listed, Cash Securities, Loans, Bonds, Notes, Equities, Rates, Commodities, and FX asset classes.
Contribution / Highlights:
⢠Liaising closely with Project Manager, Business Analysts, Product Architects, and Requirements Modelers (CFOC) to define Technical requirements and create project documentation.
⢠Development using Infa 9.1, 11g/Oracle, UNIX.
⢠Use Informatica PowerCenter for extraction, transformation and loading (ETL) of data in the Database.
⢠Created and configured Sessions in Informatica workflow Manager for loading data into Data base tables from various heterogeneous database sources like Flat Files, Oracle etc.
⢠Unit testing and system integration testing of the developed mappings.
⢠Providing production Support of the deployed code.
⢠Providing solutions to the business for the Production issues.
⢠Had one to One interaction with the client throughout the project and in daily meetings.
Project #2
company - Marshall Multimedia
description - JUN 2016 to DEC 2017
Client: Marshall Multimedia
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Marshall Multimedia is a US-based multimedia advertisement services organization which has its headquarters in New York. EGC interface systems are advert management, customer management, billing and provisioning systems for consumer & enterprise customers.
The main aim of the project was to create an enterprise data warehouse which would suffice the need of reports belonging to the following categories: financial reports, management reports and rejection reports. The professional reports were created in Cognos and the ETL work was performed with Informatica. This project is to load the advert details and magazine details coming in relational tables into the data warehouse and calculate the compensation and incentive amounts twice monthly as per business rules.
Contribution / Highlights:
⢠Developed mappings using different sources by using Informatica transformations.
⢠Created and configured Sessions in Informatica workflow Manager for loading data into Data Mart tables from various heterogeneous database sources like Flat Files, Oracle etc.
2
⢠Unit testing and system integration testing of the developed mappings.
⢠Providing solutions to the business for the Production issues.
Project #3
company - Assurant healthcare/Insurance Miami USA
description - Assurant, USA NOV 2015 to MAY 2016
Project: ACT BI - State Datamart
Client: Assurant healthcare/Insurance Miami USA
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Assurant, Inc. is a holding company with businesses that provide a diverse set of specialty, niche-market insurance products in the property, casualty, life and health insurance sectors. The company's four operating segments are Assurant Employee Benefits, Assurant Health, Assurant Solutions and Assurant Specialty Property.
The project aims at building a State Datamart for the enterprise solution. I am part of the team which is responsible for ETL design & development along with testing.
Contribution / Highlights:
⢠Performed small enhancement
⢠Daily load monitoring
⢠Attend to Informatica job failures by analyzing the root cause, resolving the failure using standard
documented process.
⢠Experience in writing SQL statements.
⢠Strong Problem Analysis & Resolution skills and ability to work in Multi Platform Environments
⢠Scheduled the Informatica jobs using Informatica scheduler
⢠Extensively used ETL methodology for developing and supporting data extraction, transformations and loading process, in a corporate-wide-ETL Solution using Informatica.
⢠Involved in creating the Unit cases and uploaded in to Quality Center for Unit Testing and UTR
⢠Ensure that daily support tasks are done in accordance with the defined SLA. |
ETL Developer | Education Details
January 2015 Bachelor of Engineering EXTC Mumbai, Maharashtra Mumbai University
January 2012 Diploma Industrial Electronics Vashi, MAHARASHTRA, IN Fr. Agnel Polytechnic
ETL Developer
ETL Developer
Skill Details
informatica - Experience - 36 months
Company Details
company - Blue Shield of California
description - Duration: (Mar 2016 - Sept 2017)
Description:
Blue Shield of California (BSC) is a health plan provider. The intent of this project is to process feeds coming in and going out of the BSC system related to eligibility, enrollment, and claims subject areas. All these feeds come in different formats and are processed using Informatica 9.6.1, Oracle 11g, Facets 5.0 & Tidal.
Technical environment: ETL tool (Informatica power Center 9.6.1), Oracle 11g (SQL, PL-SQL), UNIX, Facets, Tidal, JIRA, Putty.
Role: ETL Developer
Responsibilities: ⢠Responsible for analyzing the business requirement document ⢠Involved in development of Informatica mappings using different transformations like source qualifier, expression, filter, router, joiner, union, aggregator, normalizer, sorter, lookup and its corresponding sessions and workflows.
⢠Extensively used Informatica Debugger to figure out the problems in mapping and involved in troubleshooting the existing bugs.
⢠Writing Unix Scripts & SQL's as per the business requirement.
⢠Impact analysis of change requests & their development.
⢠Data fabrication using Facets screens as well as SQL statements in membership domain.
⢠Unit testing & trouble shooting using Informatica debugger, SQL query & preparation of Unit Test Cases.
⢠Prepare documents for design, unit testing and impact analysis.
Awards & Achievements ⢠Received Kudos Award at Syntel for contribution in error free work, commitment towards learning, client appreciation and outstanding display of Syntel values Received appreciation from Management for outstanding performance in complete tenure.
⢠Received spot recognition for automation done in project. |
ETL Developer | SKILL SET ✓ Talend Big Data Platform 6.2.1 ✓ Informatica Power Center ✓ Microsoft SQL Server Management Studio ✓ SQL Workbench ✓ AWS Services - Redshift, Athena, S3 ✓ Talend Administration Console (TAC) ✓ Microsoft Visual Studio ✓ Data Warehouse Concepts - Star Schema, Facts, Dimensions ✓ SQL ✓ Data Modeling - Microsoft Access ✓ Data Integration Education Details
January 2012 to January 2016 BE Mumbai, Maharashtra University of Mumbai
January 2012 CBSE Technology Kochi, Kerala St. Francis
Talend ETL Developer
Talend ETL Developer - Tata Consultancy Services
Skill Details
DATA WAREHOUSE - Experience - 23 months
DATABASE - Experience - 20 months
INTEGRATION - Experience - 20 months
INTEGRATOR - Experience - 20 months
MS SQL SERVER - Experience - 20 months
Company Details
company - Tata Consultancy Services
description - Prepared ETL mapping documents for every mapping and a data migration document for smooth transfer of the project from the development to the testing environment and then to the production environment. Performed unit testing and system testing to validate data loads in the target. Troubleshot long-running jobs and fixed the issues.
• Expertise in creating mappings in Talend using Big Data supporting components such as tJDBCConfiguration, tJDBCInput, tHDFSConfiguration, tS3Configuration, tCacheOut, tCacheIn, tSqlRow and standard components like tFileInputDelimited, tFileOutputDelimited, tMap, tJoin, tReplicate, tParallelize, tConvertType, tAggregate, tSortRow, tFlowMeter, tLogCatcher, tRowGenerator, tJava, tJavaRow, tAggregateRow, tFilter etc.
• Used ETL methodologies and best practices to create Talend ETL jobs. Followed and enhanced programming and naming standards. Developed jobs, components and Joblets in Talend. Used the tRunJob component to run a child job from a parent job and to pass parameters from parent to child job.
• Created and deployed physical objects including custom tables, custom views, stored procedures, and indexes to SQL Server for the staging and data warehouse environments. Involved in writing SQL queries and used joins to access data from MySQL.
• Created and managed source-to-target mapping documents for all fact and dimension tables. Broad design, development and testing experience with Talend Integration Suite and knowledge of performance tuning of mappings.
• Extensively used the tMap component, which does lookup & joiner functions. Experienced in writing expressions within tMap as per the business need. Handled insert and update strategy using tSQLRow.
• Created implicit, local and global context variables in the job to run Talend jobs against different environments.
• Worked on the Talend Administration Console (TAC) for scheduling jobs and adding users. Experienced in building a Talend job outside of Talend Studio as well as on the TAC server.
• Developed mappings to load fact and dimension tables, SCD Type 1 and SCD Type 2 dimensions and incremental loading, and unit tested the mappings.
• Developed a framework integrated job which schedules multiple jobs at a time and updates the last successful run time and success status, sends mail for failed jobs, and maintains the counts in the SQL database. Used the tParallelize component and the multi-thread execution option to run subjobs in parallel, which increases the performance of a job.
• Created Talend jobs to copy files from one server to another and utilized Talend FTP components. Implemented FTP operations using Talend Studio to transfer files between network folders as well as to an FTP server using components like tFileList, tS3Put, tFTPPut, tFileExist, tFTPConnection etc.
• Extracted data from flat files/databases, applied business logic and loaded them into the staging database as well as flat files.
• Successfully loaded data into different targets from various source systems like SQL databases, DB2, flat files, XML files etc. into the staging table and then to the target database (a plain-Python illustration of such a flat-file-to-staging load follows below).
company - Tata Consultancy Services
description - Experience in the development and design of ETL (Extract, Transform and Load) methodology for supporting data transformations and processing in a corporate-wide ETL solution using the Talend Big Data Platform.
• Excellent working experience in Agile methodologies.
• Proficiency in gathering and understanding client requirements and translating business needs into technical requirements.
• Design and develop end-to-end ETL processes from various source systems to the staging area and from staging to the data warehouse, soliciting and documenting business, functional and data requirements, context/variable diagrams, use cases and ETL related diagrams.
• Excellent oral/written communication with the ability to effectively work with onsite and remote teams.
• A good team player with excellent problem-solving ability and time management skills, having profound insight to determine priorities, schedule work and meet critical deadlines.
company - Tata Consultancy Services
description - Prepared ETL mapping documents for every mapping and a data migration document for smooth transfer of the project from the development to the testing environment and then to the production environment. Performed unit testing and system testing to validate data loads in the target. Troubleshot long-running jobs and fixed the issues. |
ETL Developer | Computer skills: Yes. SQL knowledge - yes; Unix knowledge - yes; Data warehouse knowledge - yes; Ab Initio - yes. MY HOBBIES: • Playing cricket, football. • Reading books • Visiting new places/travelling. DECLARATION: I hereby declare that the above-mentioned information is factual and correct to the best of my knowledge and belief. Date: 27.01.2019 MR. MANISH PRABHAKAR PATIL Place: MUMBAI Education Details
June 2014 to June 2015 Bachelor's Electronics and Telecommunication A C Patil college of Engineering
January 2009 to January 2011 Engineering Navi Mumbai, Maharashtra Bharati vidyapeeth
January 2008 H.S.C. Mumbai, Maharashtra Khalsa college
ETL Informatica Developer
ETL DEVELOPER
Skill Details
ETL - Experience - Less than 1 year
Data Warehouse - Experience - Less than 1 year
Datastage - Experience - Less than 1 year
Company Details
company - Reliance Infocomm
description - I have been working as an ETL Developer in Reliance Industries in India for the past 3 years. I have very good knowledge of Informatica and SQL, as well as good knowledge of Unix. I am willing to work in your company as a Developer. |
ETL Developer | Technical Summary ⢠Knowledge of Informatica Power Center (ver. 9.1 and 10) ETL Tool: Mapping designing, usage of multiple transformations. Integration of various data source like SQL Server tables, Flat Files, etc. into target data warehouse. ⢠SQL/PLSQL working knowledge on Microsoft SQL server 2010. ⢠Unix Work Description: shell scripting, error debugging. ⢠Job scheduling using Autosys, Incident management and Change Requests through Service Now, JIRA, Agile Central ⢠Basic knowledge of Intellimatch (Reconciliation tool) Education Details
January 2010 to January 2014 BTech CSE Sangli, Maharashtra Walchand College of Engineering
October 2009 H.S.C Sangli, Maharashtra Willingdon College
August 2007 S.S.C Achievements Sangli, Maharashtra Martin's English School
ETL Developer
IT Analyst
Skill Details
ETL- Exprience - 48 months
EXTRACT, TRANSFORM, AND LOAD- Exprience - 48 months
INFORMATICA- Exprience - 48 months
MS SQL SERVER- Exprience - 48 months
RECONCILIATION- Exprience - 48 months
Jira- Exprience - 36 monthsCompany Details
company - Tata Consultancy Services
description - Project Details
Client/Project: Barclays UK London/HEXAD
Environment: Informatica (Power Center), SQL Server, UNIX, Autosys, Intellimatch.
Project Description:
The objective is to implement a strategic technical solution to support the governance and monitoring of break standards - including enhancements to audit capabilities. As a part of this program, the required remediation of source system data feeds involves consolidation of data into standardized feeds.
These remediated data feeds will be consumed by ETL layer. The reconciliation tool is
designed to source data from an ETL layer. The data from the Front and Back office systems,
together with static data must therefore be delivered to ETL. Here it will be pre-processed and delivered to reconciliation tool before the reconciliation process can be performed.
Role and Responsibilities:
⢠Responsible for analyzing, designing and developing ETL strategies and processes,
writing ETL specifications
⢠Requirement gathering
⢠Making functional documents and low level documents
⢠Developing and debugging the Informatica mappings to resolve bugs, and identify the causes of failures
⢠User interaction to identify the issues with the data loaded through the application
⢠Developed mappings using different transformations
company - Tata Consultancy Services
description - Project Details
Client/Project: Barclays UK London/HEXAD
Environment: Informatica (Power Center), SQL Server, UNIX, Autosys, Intellimatch.
Project Description:
The objective is to implement a strategic technical solution to support the governance and monitoring of break standards - including enhancements to audit capabilities. As a part of this program, the required remediation of source system data feeds involves consolidation of data into standardized feeds.
These remediated data feeds will be consumed by ETL layer. The reconciliation tool is
designed to source data from an ETL layer. The data from the Front and Back office systems,
together with static data must therefore be delivered to ETL. Here it will be pre-processed and delivered to reconciliation tool before the reconciliation process can be performed.
Role and Responsibilities:
⢠Responsible for analyzing, designing and developing ETL strategies and processes,
writing ETL specifications
⢠Requirement gathering
⢠Making functional documents and low level documents
⢠Developing and debugging the Informatica mappings to resolve bugs, and identify the causes of failures
⢠User interaction to identify the issues with the data loaded through the application
⢠Developed mappings using different transformations |
ETL Developer | TechnicalProficiencies DB: Oracle 11g Domains: Investment Banking, Advertising, Insurance. Programming Skills: SQL, PLSQL BI Tools: Informatica 9.1 OS: Windows, Unix Professional Development Trainings ⢠Concepts in Data Warehousing, Business Intelligence, ETL. ⢠BI Tools -Informatica 9X Education Details
BCA Nanded, Maharashtra Nanded University
ETL Developer
ETL Developer - Sun Trust Bank NY
Skill Details
ETL- Exprience - 39 months
EXTRACT, TRANSFORM, AND LOAD- Exprience - 39 months
INFORMATICA- Exprience - 39 months
ORACLE- Exprience - 39 months
UNIX- Exprience - 39 monthsCompany Details
company - Sun Trust Bank NY
description - Sun Trust Bank, NY JAN 2018 to present
Client: Sun Trust Bank NY
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Sun Trust Bank is a US based multinational financial services holding company, headquarters in NY that operates the Bank in New York and other financial services investments. The company is organized as a stock corporation with four divisions: investment banking, private banking, Retail banking and a shared services group that provides
Financial services and support to the other divisions.
The objective of the first module was to create a DR system for the bank with a central point of communication and storage for Listed, Cash securities, Loans, Bonds, Notes, Equities, Rates, Commodities, and
FX asset classes.
Contribution / Highlights:
⢠Liaising closely with Project Manager, Business Analysts, Product Architects, and Requirements Modelers (CFOC) to define Technical requirements and create project documentation.
⢠Development using Infa 9.1, 11g/Oracle, UNIX.
⢠Use Informatica PowerCenter for extraction, transformation and loading (ETL) of data in the Database.
⢠Created and configured Sessions in Informatica workflow Manager for loading data into Data base tables from various heterogeneous database sources like Flat Files, Oracle etc.
⢠Unit testing and system integration testing of the developed mappings.
⢠Providing production Support of the deployed code.
⢠Providing solutions to the business for the Production issues.
⢠Had one to One interaction with the client throughout the project and in daily meetings.
Project #2
company - Marshall Multimedia
description - JUN 2016 to DEC 2017
Client: Marshall Multimedia
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Marshall Multimedia is a US based multimedia advertisement services based organization which has
head courter in New York. EGC interface systems are advert management, Customer Management, Billing and
Provisioning Systems for Consumer& Enterprise Customers.
The main aim of the project was to create an enterprise data warehouse which would suffice the need of reports belonging to the following categories: Financial reports, management reports and
rejection reports. The professional reports were created by Cognos and ETL work was performed by
Informatica. This project is to load the advert details and magazine details coming in Relational tables into data warehouse and calculate the compensation and incentive amount monthly twice as per business
rules.
Contribution / Highlights:
⢠Developed mappings using different sources by using Informatica transformations.
⢠Created and configured Sessions in Informatica workflow Manager for loading data into Data Mart tables from various heterogeneous database sources like Flat Files, Oracle etc.
2
⢠Unit testing and system integration testing of the developed mappings.
⢠Providing solutions to the business for the Production issues.
Project #3
company - Assurant healthcare/Insurance Miami USA
description - Assurant, USA NOV 2015 to MAY 2016
Project: ACT BI - State Datamart
Client: Assurant healthcare/Insurance Miami USA
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Assurant, Inc. is a holding company with businesses that provide a diverse set of specialty, niche-market insurance
products in the property, casualty, life and health insurance sectors. The company's four operating segments are Assurant
Employee Benefits, Assurant Health, Assurant Solutions and Assurant Specialty Property.
The project aim at building State Datamart for enterprise solution. I am part of team which is responsible for ETL
Design & development along with testing.
Contribution / Highlights:
⢠Performed small enhancement
⢠Daily load monitoring
⢠Attend to Informatica job failures by analyzing the root cause, resolving the failure using standard
documented process.
⢠Experience in writing SQL statements.
⢠Strong Problem Analysis & Resolution skills and ability to work in Multi Platform Environments
⢠Scheduled the Informatica jobs using Informatica scheduler
⢠Extensively used ETL methodology for developing and supporting data extraction, transformations and loading process, in a corporate-wide-ETL Solution using Informatica.
⢠Involved in creating the Unit cases and uploaded in to Quality Center for Unit Testing and UTR
⢠Ensure that daily support tasks are done in accordance with the defined SLA. |
ETL Developer | Education Details
January 2015 Bachelor of Engineering EXTC Mumbai, Maharashtra Mumbai University
January 2012 Diploma Industrial Electronics Vashi, MAHARASHTRA, IN Fr. Agnel Polytechnic
ETL Developer
ETL Developer
Skill Details
informatica- Exprience - 36 monthsCompany Details
company - Blue Shield of California
description - Duration: (Mar 2016 - Sept 2017)
Description:
Blue Shield of California (BSC) is health plan provider. The intent of this project is to process feeds coming in and going out of BSC system related to eligibility, enrollment, and claims subject areas. All these feeds comes in different formats and are processed using Informatica 9.6.1, Oracle 11g, Facets 5.0 &Tidal.
Technical environment: ETL tool (Informatica power Center 9.6.1), Oracle 11g (SQL, PL-SQL), UNIX, Facets, Tidal, JIRA, Putty.
Role: ETL Developer
Responsibilities: ⢠Responsible for analyzing the business requirement document ⢠Involved in development of Informatica mappings using different transformations like source qualifier, expression, filter, router, joiner, union, aggregator, normalizer, sorter, lookup and its corresponding sessions and workflows.
• Extensively used the Informatica Debugger to figure out problems in mappings and involved in troubleshooting the existing bugs.
• Writing UNIX scripts & SQL as per the business requirement.
• Impact analysis of change requests & their development.
• Data fabrication using Facets screens as well as SQL statements in the membership domain.
• Unit testing & troubleshooting using the Informatica Debugger and SQL queries, and preparation of unit test cases.
• Prepared documents for design, unit testing and impact analysis.
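The mappings themselves were built in Informatica's graphical designer, so there is no source code to quote; the sketch below is only a conceptual Python analogue of a filter -> lookup -> aggregator flow of the kind listed above. The column names (status, plan_code, claim_amount) and the file layout are illustrative assumptions, not the project's actual feed definitions.

```python
# Conceptual analogue of a filter -> lookup -> aggregator mapping.
# Field names and file layout are hypothetical.
import csv
from collections import defaultdict

def load_lookup(path):
    """Build an in-memory lookup table keyed on plan_code (lookup transformation)."""
    with open(path, newline="") as f:
        return {row["plan_code"]: row["plan_name"] for row in csv.DictReader(f)}

def transform(claims_path, plans_path):
    plans = load_lookup(plans_path)
    totals = defaultdict(float)                      # aggregator: sum by plan
    with open(claims_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] != "APPROVED":          # filter transformation
                continue
            plan_name = plans.get(row["plan_code"], "UNKNOWN")   # lookup transformation
            totals[plan_name] += float(row["claim_amount"])      # expression + aggregation
    return dict(totals)

if __name__ == "__main__":
    # Point these at real extract files before running.
    print(transform("claims.csv", "plans.csv"))
```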
Awards & Achievements • Received the Kudos Award at Syntel for contribution to error-free work, commitment towards learning, client appreciation and outstanding display of Syntel values. • Received appreciation from management for outstanding performance over the complete tenure.
• Received spot recognition for automation done in the project. |
ETL Developer | SKILL SET • Talend Big Data Platform 6.2.1 • Informatica Power Center • Microsoft SQL Server Management Studio • SQL Workbench • AWS Services - Redshift, Athena, S3 • Talend Administration Console (TAC) • Microsoft Visual Studio • Data Warehouse Concepts - Star Schema, Facts, Dimensions • SQL • Data Modeling • Data Integration • Microsoft Access Education Details
January 2012 to January 2016 BE Mumbai, Maharashtra University of Mumbai
January 2012 CBSE Technology Kochi, Kerala St. Francis
Talend ETL Developer
Talend ETL Developer - Tata Consultancy Services
Skill Details
DATA WAREHOUSE - Experience - 23 months
DATABASE - Experience - 20 months
INTEGRATION - Experience - 20 months
INTEGRATOR - Experience - 20 months
MS SQL SERVER - Experience - 20 months
Company Details
company - Tata Consultancy Services
description - Prepared ETL mapping documents for every mapping and a data migration document for smooth transfer of the project from the development environment to the testing environment and then to the production environment. Performed unit testing and system testing to validate data loads in the target. Troubleshot long-running jobs and fixed the issues.
• Expertise in creating mappings in Talend using Big Data supporting components such as tJDBCConfiguration, tJDBCInput, tHDFSConfiguration, tS3Configuration, tCacheOut, tCacheIn, tSqlRow and standard components like tFileInputDelimited, tFileOutputDelimited, tMap, tJoin, tReplicate, tParallelize, tConvertType, tAggregate, tSortRow, tFlowMeter, tLogCatcher, tRowGenerator, tJava, tJavaRow, tAggregateRow, tFilter etc.
• Used ETL methodologies and best practices to create Talend ETL jobs. Followed and enhanced programming and naming standards. Developed jobs, components and Joblets in Talend. Used the tRunJob component to run a child job from a parent job and to pass parameters from the parent to the child job.
• Created and deployed physical objects including custom tables, custom views, stored procedures, and indexes to SQL Server for the Staging and Data Warehouse environments. Involved in writing SQL queries and used joins to access data from MySQL.
• Created and managed source-to-target mapping documents for all fact and dimension tables. Broad design, development and testing experience with Talend Integration Suite and knowledge in performance tuning of mappings.
• Extensively used the tMap component, which performs lookup & joiner functions. Experienced in writing expressions within tMap as per the business need. Handled insert and update strategy using tSQLRow.
• Created implicit, local and global context variables in the job to run Talend jobs against different environments.
• Worked on the Talend Administration Console (TAC) for scheduling jobs and adding users. Experienced in building a Talend job outside of Talend Studio as well as on the TAC server.
• Developed mappings to load fact and dimension tables, SCD Type 1 and SCD Type 2 dimensions and incremental loading, and unit tested the mappings (a simplified SCD Type 2 sketch follows this list).
• Developed a framework integration job which schedules multiple jobs at a time, updates the last successful run time and success status, sends mail for failed jobs, and maintains the counts in the SQL database. Used the tParallelize component and the multi-thread execution option to run subjobs in parallel, which increases the performance of a job.
• Created Talend jobs to copy files from one server to another and utilized Talend FTP components. Implemented FTP operations using Talend Studio to transfer files between network folders as well as to an FTP server using components like tFileList, tS3Put, tFTPPut, tFileExist, tFTPConnection etc.
• Extracted data from flat files / databases and applied business logic to load them into the staging database as well as flat files.
• Successfully loaded data into different targets from various source systems like SQL database, DB2, flat files, XML files etc., into the staging table and then to the target database.
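The SCD Type 2 loads mentioned above were implemented with Talend components (tMap, tSQLRow and related jobs) rather than hand-written code; the Python sketch below only illustrates the expire-and-insert pattern an SCD Type 2 dimension load performs. The table and column names (dim_customer, cust_id, city, is_current) are assumptions made for the example, with sqlite3 standing in for the real warehouse database.

```python
# Minimal SCD Type 2 sketch: expire the current row when a tracked attribute
# changes and insert a new current row with a fresh start date.
import sqlite3
from datetime import date

def scd2_upsert(conn, cust_id, city, today=None):
    today = today or date.today().isoformat()
    row = conn.execute(
        "SELECT sk, city FROM dim_customer WHERE cust_id=? AND is_current=1",
        (cust_id,)).fetchone()
    if row and row[1] == city:
        return                                    # no change: nothing to do
    if row:                                       # change detected: expire old version
        conn.execute(
            "UPDATE dim_customer SET is_current=0, end_date=? WHERE sk=?",
            (today, row[0]))
    conn.execute(                                 # insert new current version
        "INSERT INTO dim_customer (cust_id, city, start_date, end_date, is_current) "
        "VALUES (?, ?, ?, NULL, 1)", (cust_id, city, today))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (sk INTEGER PRIMARY KEY, cust_id TEXT, "
             "city TEXT, start_date TEXT, end_date TEXT, is_current INTEGER)")
scd2_upsert(conn, "C001", "Mumbai")
scd2_upsert(conn, "C001", "Pune")                 # attribute change creates a second version
print(conn.execute("SELECT * FROM dim_customer").fetchall())
```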
company - Tata Consultancy Services
description - Experience in development and design of ETL (Extract, Transform and Load) methodology for supporting data transformations and processing in a corporate-wide ETL solution using the Talend Big Data Platform.
• Excellent working experience in Agile methodologies.
• Proficiency in gathering and understanding the client requirements and translating business needs into technical requirements.
• Design and develop end-to-end ETL processes from various source systems to the staging area and from staging to the data warehouse, soliciting and documenting business, functional and data requirements, context/variable diagrams, use cases and ETL-related diagrams.
• Excellent oral/written communication with the ability to work effectively with onsite and remote teams.
• A good team player with excellent problem-solving ability and time management skills, having profound insight to determine priorities, schedule work and meet critical deadlines.
company - Tata Consultancy Services
description - Prepared ETL mapping documents for every mapping and a data migration document for smooth transfer of the project from the development environment to the testing environment and then to the production environment. Performed unit testing and system testing to validate data loads in the target. Troubleshot long-running jobs and fixed the issues. |
ETL Developer | Computer skills: Yes. SQL knowledge - yes. Unix knowledge - yes. Data warehouse knowledge - yes. Ab Initio - yes. MY HOBBIES: • Playing cricket, football. • Reading books. • Visiting new places/travelling. DECLARATION: I hereby declare that the above-mentioned information is factual and correct to the best of my knowledge and belief. Date: 27.01.2019 MR. MANISH PRABHAKAR PATIL Place: MUMBAI Education Details
June 2014 to June 2015 Bachelor's Electronics and Telecommunication A C Patil college of Engineering
January 2009 to January 2011 Engineering Navi Mumbai, Maharashtra Bharati vidyapeeth
January 2008 H.S.C. Mumbai, Maharashtra Khalsa college
ETL Informatica Developer
ETL DEVELOPER
Skill Details
ETL - Experience - Less than 1 year
Data Warehouse - Experience - Less than 1 year
DataStage - Experience - Less than 1 year
Company Details
company - Reliance Infocomm
description - I have been working as an ETL Developer at Reliance Industries in India for the past 3 years. I have very good knowledge of Informatica and SQL as well as good knowledge of Unix. I am willing to work in your company as a Developer. |
ETL Developer | Technical Summary • Knowledge of Informatica Power Center (ver. 9.1 and 10) ETL tool: mapping design, usage of multiple transformations, integration of various data sources like SQL Server tables, flat files, etc. into the target data warehouse. • SQL/PLSQL working knowledge on Microsoft SQL Server 2010. • Unix work description: shell scripting, error debugging. • Job scheduling using Autosys; incident management and change requests through ServiceNow, JIRA, Agile Central. • Basic knowledge of Intellimatch (reconciliation tool). Education Details
January 2010 to January 2014 BTech CSE Sangli, Maharashtra Walchand College of Engineering
October 2009 H.S.C Sangli, Maharashtra Willingdon College
August 2007 S.S.C Achievements Sangli, Maharashtra Martin's English School
ETL Developer
IT Analyst
Skill Details
ETL - Experience - 48 months
EXTRACT, TRANSFORM, AND LOAD - Experience - 48 months
INFORMATICA - Experience - 48 months
MS SQL SERVER - Experience - 48 months
RECONCILIATION - Experience - 48 months
Jira - Experience - 36 months
Company Details
company - Tata Consultancy Services
description - Project Details
Client/Project: Barclays UK London/HEXAD
Environment: Informatica (Power Center), SQL Server, UNIX, Autosys, Intellimatch.
Project Description:
The objective is to implement a strategic technical solution to support the governance and monitoring of break standards, including enhancements to audit capabilities. As a part of this program, the required remediation of source system data feeds involves consolidation of data into standardized feeds.
These remediated data feeds will be consumed by the ETL layer. The reconciliation tool is designed to source data from the ETL layer. The data from the front and back office systems, together with static data, must therefore be delivered to ETL. Here it will be pre-processed and delivered to the reconciliation tool before the reconciliation process can be performed.
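As an illustration of the matching step the reconciliation tool performs on the pre-processed feeds, the minimal Python sketch below pairs front office and back office records on a key and flags breaks. The field names (trade_id, amount) and the tolerance are assumptions; the real matching rules live in the Intellimatch configuration, not in code like this.

```python
# Simplified reconciliation: match two feeds on trade_id and flag breaks.
def reconcile(front_office, back_office, tolerance=0.01):
    fo = {r["trade_id"]: r["amount"] for r in front_office}
    bo = {r["trade_id"]: r["amount"] for r in back_office}
    breaks = []
    for trade_id in fo.keys() | bo.keys():
        if trade_id not in fo:
            breaks.append((trade_id, "missing in front office feed"))
        elif trade_id not in bo:
            breaks.append((trade_id, "missing in back office feed"))
        elif abs(fo[trade_id] - bo[trade_id]) > tolerance:
            breaks.append((trade_id, f"amount break {fo[trade_id]} vs {bo[trade_id]}"))
    return breaks

front = [{"trade_id": "T1", "amount": 100.0}, {"trade_id": "T2", "amount": 55.5}]
back  = [{"trade_id": "T1", "amount": 100.0}, {"trade_id": "T3", "amount": 10.0}]
print(reconcile(front, back))   # T2 and T3 are reported as breaks
```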
Role and Responsibilities:
• Responsible for analyzing, designing and developing ETL strategies and processes, and writing ETL specifications
• Requirement gathering
• Preparing functional documents and low-level documents
• Developing and debugging Informatica mappings to resolve bugs and identify the causes of failures
• User interaction to identify issues with the data loaded through the application
• Developed mappings using different transformations
company - Tata Consultancy Services
description - Project Details
Client/Project: Barclays UK London/HEXAD
Environment: Informatica (Power Center), SQL Server, UNIX, Autosys, Intellimatch.
Project Description:
The objective is to implement a strategic technical solution to support the governance and monitoring of break standards, including enhancements to audit capabilities. As a part of this program, the required remediation of source system data feeds involves consolidation of data into standardized feeds.
These remediated data feeds will be consumed by the ETL layer. The reconciliation tool is designed to source data from the ETL layer. The data from the front and back office systems, together with static data, must therefore be delivered to ETL. Here it will be pre-processed and delivered to the reconciliation tool before the reconciliation process can be performed.
Role and Responsibilities:
• Responsible for analyzing, designing and developing ETL strategies and processes, and writing ETL specifications
• Requirement gathering
• Preparing functional documents and low-level documents
• Developing and debugging Informatica mappings to resolve bugs and identify the causes of failures
• User interaction to identify issues with the data loaded through the application
• Developed mappings using different transformations |
ETL Developer | Technical Proficiencies DB: Oracle 11g Domains: Investment Banking, Advertising, Insurance. Programming Skills: SQL, PL/SQL BI Tools: Informatica 9.1 OS: Windows, Unix Professional Development Trainings • Concepts in Data Warehousing, Business Intelligence, ETL. • BI Tools - Informatica 9.x Education Details
BCA Nanded, Maharashtra Nanded University
ETL Developer
ETL Developer - Sun Trust Bank NY
Skill Details
ETL - Experience - 39 months
EXTRACT, TRANSFORM, AND LOAD - Experience - 39 months
INFORMATICA - Experience - 39 months
ORACLE - Experience - 39 months
UNIX - Experience - 39 months
Company Details
company - Sun Trust Bank NY
description - Sun Trust Bank, NY JAN 2018 to present
Client: Sun Trust Bank NY
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Sun Trust Bank is a US-based multinational financial services holding company, headquartered in NY, that operates the bank in New York and other financial services investments. The company is organized as a stock corporation with four divisions: investment banking, private banking, retail banking and a shared services group that provides financial services and support to the other divisions.
The objective of the first module was to create a DR system for the bank with a central point of communication and storage for Listed, Cash securities, Loans, Bonds, Notes, Equities, Rates, Commodities, and
FX asset classes.
Contribution / Highlights:
• Liaising closely with the Project Manager, Business Analysts, Product Architects, and Requirements Modelers (CFOC) to define technical requirements and create project documentation.
• Development using Informatica 9.1, Oracle 11g, UNIX.
• Used Informatica PowerCenter for extraction, transformation and loading (ETL) of data in the database.
• Created and configured sessions in Informatica Workflow Manager for loading data into database tables from various heterogeneous database sources like flat files, Oracle etc. (a simplified flat-file load sketch follows this list).
• Unit testing and system integration testing of the developed mappings.
• Providing production support of the deployed code.
• Providing solutions to the business for the production issues.
• Had one-to-one interaction with the client throughout the project and in daily meetings.
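The loads referenced above were configured as Informatica sessions rather than written by hand; the short Python sketch below only illustrates the flat-file-to-staging-table pattern such a session implements, with sqlite3 standing in for Oracle and a made-up pipe-delimited layout (id|name|balance).

```python
# Minimal flat-file-to-staging-table load, analogous to an Informatica session
# that reads a delimited file and bulk-inserts rows into a database table.
import csv
import sqlite3

def load_flat_file(conn, path):
    conn.execute("CREATE TABLE IF NOT EXISTS stg_account (id TEXT, name TEXT, balance REAL)")
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter="|")            # assumed pipe-delimited layout
        rows = [(r[0], r[1], float(r[2])) for r in reader]
    conn.executemany("INSERT INTO stg_account VALUES (?, ?, ?)", rows)  # bulk load
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
# print(load_flat_file(conn, "accounts.dat"))   # run against a real extract file
```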
Project #2
company - Marshall Multimedia
description - JUN 2016 to DEC 2017
Client: Marshall Multimedia
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Marshall Multimedia is a US-based multimedia advertisement services organization headquartered in New York. The EGC interface systems are the advert management, customer management, billing and provisioning systems for consumer & enterprise customers.
The main aim of the project was to create an enterprise data warehouse which would suffice the need for reports belonging to the following categories: financial reports, management reports and rejection reports. These reports were created in Cognos, and the ETL work was performed in Informatica. This project loads the advert and magazine details arriving in relational tables into the data warehouse and calculates the compensation and incentive amounts twice a month as per the business rules.
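The compensation and incentive rules themselves are proprietary and implemented inside the Informatica mappings; the small Python sketch below only illustrates the shape of such a semi-monthly calculation. The base commission rate, incentive rate and revenue target are made-up placeholders, not the project's actual business rules.

```python
# Illustrative semi-monthly compensation/incentive calculation (placeholder rules).
def compensation(advert_revenue, incentive_rate=0.05, target=100000.0):
    base = advert_revenue * 0.10                      # assumed 10% base commission
    incentive = advert_revenue * incentive_rate if advert_revenue >= target else 0.0
    return round(base + incentive, 2)

# Two runs per month, one per semi-monthly load window.
for period, revenue in [("1-15", 120000.0), ("16-31", 80000.0)]:
    print(period, compensation(revenue))
```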
Contribution / Highlights:
• Developed mappings from different sources using Informatica transformations.
• Created and configured sessions in Informatica Workflow Manager for loading data into Data Mart tables from various heterogeneous database sources like flat files, Oracle etc.
• Unit testing and system integration testing of the developed mappings.
• Providing solutions to the business for the production issues.
Project #3
company - Assurant healthcare/Insurance Miami USA
description - Assurant, USA NOV 2015 to MAY 2016
Project: ACT BI - State Datamart
Client: Assurant healthcare/Insurance Miami USA
Environment: Informatica Power Center 9.1, Oracle 11g, unix.
Role: ETL Developer
Project Profile:
Assurant, Inc. is a holding company with businesses that provide a diverse set of specialty, niche-market insurance
products in the property, casualty, life and health insurance sectors. The company's four operating segments are Assurant
Employee Benefits, Assurant Health, Assurant Solutions and Assurant Specialty Property.
The project aims at building a State Datamart as an enterprise solution. I am part of the team responsible for ETL design & development along with testing.
Contribution / Highlights:
• Performed small enhancements.
• Daily load monitoring.
• Attended to Informatica job failures by analyzing the root cause and resolving the failure using the standard documented process.
• Experience in writing SQL statements.
• Strong problem analysis & resolution skills and ability to work in multi-platform environments.
• Scheduled the Informatica jobs using the Informatica scheduler.
• Extensively used ETL methodology for developing and supporting data extraction, transformation and loading processes in a corporate-wide ETL solution using Informatica.
• Involved in creating unit test cases and uploading them into Quality Center for unit testing and UTR.
• Ensured that daily support tasks are done in accordance with the defined SLA. |