Sr. Data Engineer / BRAZIL ONLY / Remote / USD payment
About the role:
In this role, you will design, develop, and maintain highly scalable and reliable data pipelines; ensure efficient data integration; and optimize data storage solutions that support millions of users through the integration of new platform APIs. You will collaborate closely with data scientists, analysts, and software engineers to build robust data architectures that support our business intelligence and analytics initiatives.
As RHEI is a high-growth company, you should enjoy working in an entrepreneurial, fast-changing environment. RHEI's fully remote work model offers an excellent work-life balance.
Key Responsibilities:
● Develop, test, and maintain data architectures, including databases, data lakes, and data warehouses.
● Design and implement scalable and reliable ETL/ELT pipelines to ingest, transform, and load data from various sources.
● Lead technical integration efforts to connect new external platforms and data sources in a highly scalable fashion.
● Optimize and improve data processing workflows for performance, scalability, and cost-effectiveness.
● Ensure data integrity, consistency, and security across all data platforms.
● Collaborate with cross-functional teams to understand data needs and provide efficient solutions.
● Provide actionable insights for improving performance of relational databases.
● Monitor and troubleshoot data pipelines, ensuring timely resolution of issues.
● Implement best practices for data governance, metadata management, and documentation.
● Work with cloud-based data platforms and leverage services such as S3, Redshift, Snowflake, or similar technologies.
Key Requirements:
● Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
● 5 years of experience in data engineering or a similar role.
● Experience with Snowflake.
● Proficiency in Python.
● Proficiency in SQL and experience working with relational databases (PostgreSQL, MySQL, SQL Server).
● Proven experience in data pipeline development and in designing large-scale, high-throughput data integration systems.
● Hands-on experience with ETL/ELT orchestration tools such as Apache Airflow.
● Experience with AWS and related data services.
● Knowledge of data modeling, warehousing concepts, and best practices.
● Experience integrating data solutions with REST APIs.
● Understanding of CI/CD pipelines and DevOps practices for data infrastructure.
● Strong problem-solving skills and ability to work in a fast-paced environment.
Plus Requirements:
● Experience with NoSQL databases like DynamoDB or MongoDB.
● Familiarity with data streaming technologies such as Apache Kafka or AWS Kinesis.
● Experience working with containerized applications using Docker or Kubernetes.
● Experience with machine learning, including knowledge of model deployment and MLOps concepts.
Locations: Brazil
Remote status: Fully Remote