DevOps Engineer (DevOps + Bigdata) Jobs in Pune - Acquia
Job Description
DevOps Engineer w/ Big Data
Job description summary:
The DevOps Engineer is responsible for crafting and delivering secure, highly available solutions. You will be a critical part of a team passionate about ensuring our critical services are ready and stress tested. You should be comfortable taking on new challenges, defining potential solutions, and implementing designs in a team environment. This position is expected to both guide and support the team's growth and learning.
What You Will Accomplish
- The DevOps team partners closely with Engineering, Support, and Ops. We are responsible for the design, deployment, and continuous operation of the AgilOne platform.
- You will evolve our existing platform to the next level with CI/CD, automated diagnostics/scaling/healing, and more.
- You will work on a team responsible for a blend of architecture, automation, development, and application administration.
- You will build and deploy solutions from the infrastructure, to the network, and application layers, on public cloud platforms.
- You will ensure our SaaS platform is available and performing, and that we notice problems before our customers do.
- You will build the tools to improve the speed, confidence, and visibility of our SaaS deployments.
- You will help build security into every step of the software & infrastructure life cycle.
- You will collaborate with Support and Engineering on customer issues, as needed.
- Building and maintaining re-deployable cloud and on-premise infrastructure;
- Working with distributed data infrastructure, including containerization and virtualization tools, to enable unified engineering and production environments;
- Developing dashboards, monitors, and alerts to increase situational awareness of production issues, SLAs, and security incidents.
- Independently conceiving and implementing ways to improve development efficiency, code reliability, and test fidelity.
- You will participate in a periodic on-call rotation.
What You Will Need
- Must have experience deploying, tuning, and maintaining Linux-based, highly available, fault-tolerant platforms on public cloud providers such as AWS, Azure, and GCP.
- Must have in-depth knowledge of big data technologies: Hadoop, HDFS, Hive, Spark, Kafka, YARN, ZooKeeper, etc.
- Must be comfortable with common configuration management and orchestration tools. Experience with, or the ability to learn, Ansible and AWS/GCP services and APIs.
- An understanding of SQL queries and how they work.
- The ability to dig deep into infrastructure and code to tackle problems.
- A DevOps mentality.
- The drive to tackle traditional operations problems through automation.
- Familiarity with a modern programming language. Experience with, or the ability to learn, Go, Python, and Linux shell scripting.
- Enjoy learning new tools and languages.
- Enjoy a collaborative environment.
- High attention to detail.
- Strong customer focus.
- An enthusiastic self-starter with a commitment to learning, customer empathy, and team communication.
- A Bachelor's degree in Computer Science, Engineering, or MIS, or experience in software engineering or a related field.
- Experience with virtualization and containerization technologies such as Kubernetes and Docker.
- Familiarity with information security standards and methodologies (SOC 2, HIPAA) preferred.
- Familiarity with common monitoring, log aggregation, and metrics platforms (Nagios, Sensu, Splunk, Sumo Logic, et al.).
- Hadoop/Hive/HDFS/Spark/Kafka/YARN: 5+ years
- Has previously built, or was involved in building, a CI/CD pipeline: 5+ years
- Continuous delivery/integration tools (Jenkins, Spinnaker, Artifactory): 5+ years
- Hands-on Unix/Linux knowledge: 5+ years
- Writing build scripts using Python, Terraform, and Unix shell (bash, ksh): 5+ years
- DevOps and/or build & release experience, including delivery: 5+ years
- Automation/configuration management using Ansible: 3+ years
- Software configuration management tools: 3+ years
- DB/data platforms (Aurora/MySQL): 3+ years
- AWS capabilities and architecture: 3+ years
- Modern application monitoring tools: 2+ years
Job Particulars
Role: IT Software Engineer
Who can apply: Freshers and Experienced (0 to 3 years)
Hiring Process: Face-to-Face Interview
Employment Type:
Job Id: 1132974
Job Category: IT/Software, BSc/BCA/BBM
Locality Address:
State: Maharashtra
Country: India
About Company
Acquia