ID 2641 – Data Engineer

Job Category: Data
Job Type: Remote
Job Location: LATAM

We are the nearshore staff augmentation partner of choice for U.S.-based businesses. Same language, same time zone, and the same commitment to excellence.

Requirements:
● 5+ years of experience in an Enterprise Data Management or Data Engineering role.
● 3+ years of hands-on experience building metadata-driven data pipelines using Azure Data Factory and Databricks / Spark for a cloud data lake.
● 5+ years of hands-on experience using one or more of the following for data analysis and wrangling: Databricks, Python / PySpark, Jupyter Notebooks.
● Expert-level SQL knowledge on databases such as, but not limited to, Snowflake, Netezza, Oracle, SQL Server, MySQL, Teradata.
● Experience working in a multi-developer environment and hands-on experience using either Azure DevOps or GitLab.
● Preferably experienced in SLA-driven production data pipeline or quality support.
● Experience with, or a strong understanding of, traditional enterprise ETL platforms such as IBM DataStage, Informatica, Pentaho, Ab Initio.
● Functional knowledge of some of the following technologies: Terraform, Azure CLI, PowerShell, containerization (Kubernetes, Docker).
● Functional knowledge of one or more reporting tools such as Power BI, Tableau, OBIEE.
● Ability to implement Agile methodologies and work in an Agile DevOps environment.

Soft skills:
● Personal attributes: self-starter, collaborative, curious, strong work ethic, highly motivated, team-oriented.
● Team player with excellent communication skills, able to communicate directly with the customer and explain the status of deliverables in scrum calls.

Bonus skills:
● 3+ years of hands-on experience with one or more big data technologies such as Cloudera Hadoop, Pivotal, Vertica, or MapR is a plus.


Apply for this position

If you are already talking to a recruiter from CONEXIONHR, DON'T FILL OUT THE FORM — just keep talking to your recruiter.

Allowed file type(s): .pdf, .doc, .docx