Data Operations Engineer (Job Code: J42498A)

 Job Summary
 
Experience:
6 - 9 Years
 
Location:
Pune
 
Designation:
Data Operations Engineer
 
Degree:
BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MCA, ME-Comp/IT, ME-Other, MTech-Comp/IT, MTech-Other
 
Educational Level:
Undergraduate/Diploma
 
Industrial Type:
IT-Software/Software Services
 
Functional Area:
IT Software - DBA / Data Warehousing
 
Key Skills:
MySQL, ETL
 
Job Post Date:
2020-06-16 13:29:40  
 
 

 Company Description
 
Using embedded artificial intelligence refined by real-time human insight, our client gives life sciences sales and marketing teams the information they need to improve the customer experience.

Our client equips life science companies with the decision support technology to quickly evaluate mountains of data, extracting only what's relevant and valuable at the time of decision. With brand strategy as a starting point, our client's software analyzes market data, channel activity, and HCP preference to provide life science companies with the insights, clarity, and guidance to deliver the right information at the right time to physicians and their patients.

More than half of the world's top 20 pharmaceutical companies rely on our client's software to put complex data into context, coordinate channel activity, and significantly improve sales effectiveness.

Headquartered in San Francisco, the company also has offices in Philadelphia, London, Barcelona, Tokyo, Osaka, Shanghai, Beijing, Sydney, and São Paulo.
 

 Job Description
 
RESPONSIBILITIES
• Manage and organize fault-tolerant data pipelines
• Manage relational databases and data warehouses in AWS
• Support and optimize databases and data-loading processes to improve operational efficiency
• Automate the build, deployment, and operation of ETL pipelines (a minimal sketch follows this list)
• Implement facilities to monitor all aspects of the data pipelines (see the freshness-check sketch after the skills list)
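
By way of illustration, here is a minimal sketch of the kind of fault-tolerant, automated load step described above, assuming a MySQL target reachable with PyMySQL. The table name (hcp_activity), its columns, the input file, and the connection details are hypothetical placeholders, not taken from this posting.

# A minimal, illustrative ETL step: extract rows from a source CSV,
# transform them, and load them into MySQL with retries and logging.
# All names here (hcp_activity, activity.csv, credentials) are
# hypothetical placeholders.
import csv
import logging
import time

import pymysql  # assumes PyMySQL is installed: pip install pymysql

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl")

MAX_RETRIES = 3


def load_batch(rows):
    """Insert a batch of rows, retrying transient failures with backoff."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            conn = pymysql.connect(host="localhost", user="etl",
                                   password="secret", database="warehouse")
            try:
                with conn.cursor() as cur:
                    cur.executemany(
                        "INSERT INTO hcp_activity (hcp_id, channel, events)"
                        " VALUES (%s, %s, %s)",
                        rows,
                    )
                conn.commit()
                log.info("loaded %d rows", len(rows))
                return
            finally:
                conn.close()
        except pymysql.MySQLError as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(2 ** attempt)  # exponential backoff
    raise RuntimeError(f"batch failed after {MAX_RETRIES} retries")


def run(path):
    # Extract and lightly transform: parse the event count as an integer.
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        batch = [(row[0], row[1], int(row[2])) for row in reader]
    load_batch(batch)


if __name__ == "__main__":
    run("activity.csv")

The retry loop with exponential backoff is what gives the step its basic fault tolerance, and the log lines are the hook a monitoring facility would scrape.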
REQUIRED SKILLS/EXPERIENCES
• 3-5 years of experience managing MySQL
• 2-3 years of experience with ETL tools (e.g., CopyStorm, Apache Gobblin)
• Experience in building or supporting distributed, scalable, and reliable data pipelines that ingest and process data at scale
• 2-3 years of experience automating processes with configuration management software (e.g., Rundeck, Ansible, Chef, Puppet)
• Experience with CI/CD tools
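
To make the monitoring responsibility above concrete, here is a similarly hypothetical sketch of a data-freshness check against the same assumed MySQL warehouse: it verifies that the target table received rows within an assumed SLA window and exits non-zero otherwise, so a scheduler such as Rundeck or cron can alert on failure.

# A minimal, illustrative pipeline health check: verify that the target
# table received fresh rows recently and exit non-zero otherwise.
# The table, the 6-hour threshold, and the credentials are hypothetical,
# and loaded_at is assumed to be stored in UTC.
import datetime
import logging
import sys

import pymysql  # assumes PyMySQL is installed: pip install pymysql

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline-monitor")

FRESHNESS_THRESHOLD = datetime.timedelta(hours=6)  # hypothetical SLA


def check_freshness():
    """Return True if the newest row is within the freshness threshold."""
    conn = pymysql.connect(host="localhost", user="monitor",
                           password="secret", database="warehouse")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT MAX(loaded_at) FROM hcp_activity")
            (latest,) = cur.fetchone()
    finally:
        conn.close()

    if latest is None or datetime.datetime.utcnow() - latest > FRESHNESS_THRESHOLD:
        log.error("stale data: last load at %s", latest)
        return False
    log.info("data fresh: last load at %s", latest)
    return True


if __name__ == "__main__":
    sys.exit(0 if check_freshness() else 1)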