DevOps Engineer (DevOps + Big Data)
Acquia is the open digital experience company. We provide the world's most ambitious brands with products built around Drupal to allow them to embrace innovation and create customer moments that matter. At Acquia, we believe in the power of community and collaboration — giving our customers and partners the freedom to build tomorrow on their terms.
Headquartered in the U.S., we have been named one of North America’s fastest-growing software companies by Deloitte and Inc. Magazine, rated a leader by the analyst community, and named one of the Best Places to Work in India by Great Place to Work. We are Acquia. We are building for the future, and we want you to be a part of it!
Job description summary:
The DevOps Engineer is responsible for crafting and delivering secure, highly available solutions. You will be a critical part of a team passionate about ensuring our critical services are ready and stress-tested. You should be comfortable taking on new challenges, defining potential solutions, and implementing designs in a team environment. This position is expected to both guide and support the team’s growth and learning.
What You Will Accomplish
- The DevOps Engineer partners closely with Engineering, Support, and Operations. We are responsible for the design, deployment, and continuous operation of the AgilOne platform.
- You will evolve our existing platform to the next level with CI/CD, automated diagnostics/scaling/healing, and more.
- You will work on a team responsible for a blend of architecture, automation, development, and application administration.
- You will build and deploy solutions across the infrastructure, network, and application layers on public cloud platforms.
- You will ensure our SaaS platform is available and performing well, and that we notice problems before our customers do.
- You will build the tools to improve the speed, confidence, and visibility of our SaaS deployments.
- You will help build security into every step of the software & infrastructure life cycle.
- You will collaborate with Support and Engineering on customer issues, as needed.
- You will build and maintain re-deployable cloud and on-premise infrastructure.
- You will work with distributed data infrastructure, including containerization and virtualization tools, to enable unified engineering and production environments.
- You will develop dashboards, monitors, and alerts to increase situational awareness of production issues, SLA compliance, and security incidents.
- You will independently conceive of and implement ways to improve development efficiency, code reliability, and test fidelity.
- You will participate in a periodic on-call rotation.
What You Will Need
- Experience deploying, tuning, and maintaining Linux-based, highly available, fault-tolerant platforms on public cloud providers such as AWS, Azure, and GCP
- In-depth knowledge of big data technologies: Hadoop, HDFS, Hive, Spark, Kafka, YARN, ZooKeeper, etc.
- Comfort with common configuration management and orchestration tools, and experience with, or the ability to learn, Ansible and AWS/GCP services and APIs
- An understanding of SQL queries and how they work
- The ability to dig deep into infrastructure and code to tackle problems.
- A DevOps mentality.
- The drive to tackle traditional operations problems through automation.
- Familiarity with a modern programming language, and experience with or the ability to learn Go, Python, and Linux shell scripting
- Enjoy learning new tools and languages
- Enjoy a collaborative environment.
- High attention to detail.
- Strong customer focus.
- An enthusiastic self-starter with a commitment to learning, customer empathy, and team communication.
- A Bachelor’s degree in Computer Science, Engineering, or MIS, or experience in software engineering or a related field
- Experience with virtualization and container technologies such as Kubernetes and Docker
- Knowledge of standard infosec methodologies; SOC 2 and HIPAA experience preferred
- Familiarity with common monitoring, log aggregation, and metrics-capturing platforms (Nagios, Sensu, Splunk, Sumo Logic, et al.)
- Hadoop/Hive/HDFS/Spark/Kafka/YARN: 5+ years
- Has previously built, or been involved in building, a CI/CD pipeline: 5+ years
- Continuous delivery/integration tools (Jenkins, Spinnaker, Artifactory): 5+ years
- Hands-on Unix/Linux knowledge: 5+ years
- Writing build scripts using Python, Terraform, and Unix shell (bash, ksh): 5+ years
- DevOps and/or build & release experience, including delivery: 5+ years
- Automation/configuration management using Ansible: 3+ years
- Software configuration management tools: 3+ years
- DB/data platforms (Aurora/MySQL): 3+ years
- AWS capabilities and architecture: 3+ years
- Modern application monitoring tools: 2+ years
Individuals seeking employment at Acquia are considered without regard to race, color, religion, caste, creed, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, or sexual orientation. Any information you voluntarily disclose will not be considered in the hiring process or thereafter.