DataOps Engineer

Experience

--

Employment Type

Full-time

Position

--

Salary Offer

Job Description

About YellowIpe

Our mission is to inspire the connection between technology and people. We foster the best in our professionals through our expertise in finding and attracting the best talent for the best projects. Focus on People, Collaboration, and Commitment are the pillars that guide us on this trajectory.

Join the yellow team as our new DataOps Engineer!

MAIN RESPONSIBILITIES:

• Incident Response:
o Understand problems from a user perspective and communicate clearly to scope the issue.
o Reproduce bugs or issues that users are facing.
o Apply root cause analysis to quickly and efficiently find the source of the problem, patch it, test it, and communicate with the end user.
o Write postmortems summarizing every step of the resolution, helping the team track all issues.
o Monitor existing flows and infrastructure and perform the same tasks when discovering bugs/issues through monitoring and alerting.

• Maintenance:
o Monitor flows and infrastructure to identify potential issues.
o Adapt configurations to keep flows and infrastructure working as expected, keeping operations incident-free.

• Database Optimization:
o Track costs and time of processing through dedicated dashboards.
o Alert users who query tables inefficiently, incurring high costs.
o Track down jobs, views, and tables that run inefficiently, incurring high costs or slow execution.
o Optimize jobs, queries, and tables to improve both cost and speed of execution.
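As a concrete illustration of the cost-tracking side of this role, here is a minimal sketch that uses the google-cloud-bigquery client's dry-run mode to estimate what a query would cost before running it. The $6.25/TiB on-demand rate, the project setup, and the table name are assumptions for illustration; check current GCP pricing and your own billing model.

```python
def estimate_cost_usd(bytes_processed: int, usd_per_tib: float = 6.25) -> float:
    """Convert a scanned-byte count into an estimated BigQuery on-demand cost.

    6.25 USD/TiB is an assumed on-demand rate; verify against current GCP pricing.
    """
    return bytes_processed / (1 << 40) * usd_per_tib


def dry_run_cost(sql: str) -> float:
    """Dry-run a query (nothing is scanned or billed) and estimate its cost."""
    # Requires the google-cloud-bigquery package and valid GCP credentials.
    from google.cloud import bigquery

    client = bigquery.Client()
    config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=config)
    return estimate_cost_usd(job.total_bytes_processed)


# Usage (hypothetical table name, for illustration only):
#   cost = dry_run_cost("SELECT * FROM `project.dataset.events`")
#   print(f"Estimated cost: ~${cost:.4f}")
```

Wiring an estimate like this into a dashboard or a pre-merge check is one common way to catch expensive queries before they reach production.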

• Infrastructure Management:
o Manage infrastructure through Terraform.
o Share and propose good practices.
o Decommission unused infrastructure such as services, tables, or virtual machines.


• Deployments:
o Track future deployments with a Data Architect and participate in Deployment Reviews.
o Share and propose good practices of deployment.
o Accompany Data Engineers during the entire deployment process.
o Accompany Data Engineers during the subsequent period of active monitoring.
o Ensure diligent application of the deployment process, logging, and monitoring strategy.
o Take over newly deployed flows in the run process.


REQUESTED HARD SKILLS:

• Google Cloud Platform: General knowledge of the platform and various services, and at least one year of experience with GCP.
• Apache Airflow: At least two years of experience with the Airflow orchestrator, experience with Google Composer is a plus.
• Google BigQuery: Extensive experience (at least 4 years) with GBQ, knowing how to optimize tables and queries and able to design database architecture.
• Terraform: At least two years of experience with Terraform, along with knowledge of GitOps good practices.
• Apache Spark: optional expertise we would value; some of our pipelines use PySpark.
• Additional Knowledge and Experience that are a Plus:
o Pub/Sub
o Kafka
o Azure Analysis Services
o Google Cloud Storage optimization

REQUESTED SOFT SKILLS:
• Demonstrated ability to stay organized when working on several topics at the same time.
• Demonstrated ability to manage stress in an operational environment.
• Demonstrated clear communication in an operational environment.


Important information:
Living in Portugal
Remote

Apply for this opportunity by submitting your CV at: yellowipe.io/vacancies/935728513/