Senior Data Engineer / 100% Remote / USD Payment
About the role:
In this role, you will be responsible for designing, developing, and maintaining scalable data pipelines, ensuring efficient data integration, and optimizing data storage solutions. You will collaborate closely with data scientists, analysts, and software engineers to build robust data architectures that support our business intelligence and analytics initiatives.
Key Responsibilities:
- Develop, test, and maintain data architectures, including databases, data lakes, and data warehouses.
- Design and implement scalable and reliable ETL/ELT pipelines to ingest, transform, and load data from various sources.
- Optimize and improve data processing workflows for performance, scalability, and cost-effectiveness.
- Ensure data integrity, consistency, and security across all data platforms.
- Collaborate with cross-functional teams to understand data needs and provide efficient solutions.
- Provide actionable recommendations for improving relational database performance.
- Monitor and troubleshoot data pipelines, ensuring timely resolution of issues.
- Implement best practices for data governance, metadata management, and documentation.
- Work with cloud-based data platforms (AWS, GCP, Azure) and leverage services such as S3, Redshift, BigQuery, Snowflake, Databricks, or similar technologies.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 5 years of experience in data engineering or a similar role.
- Strong experience with Databricks.
- Experience building batch and streaming data pipelines.
- Proficiency in SQL and experience working with relational databases (PostgreSQL, MySQL, SQL Server).
- Hands-on experience with ETL/ELT orchestration tools such as Apache Airflow.
- Proficiency in Python.
- Experience with cloud platforms (AWS, GCP, Azure) and related data services.
- Knowledge of data modeling, data warehousing concepts, and best practices.
- Experience integrating data solutions with REST APIs.
- Understanding of CI/CD pipelines and DevOps practices for data infrastructure.
- Experience with Snowflake or a similar cloud-based data warehouse technology.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
Preferred Qualifications:
- Experience with NoSQL databases such as DynamoDB or MongoDB.
- Familiarity with data streaming technologies such as Apache Kafka or AWS Kinesis.
- Experience working with containerized applications using Docker or Kubernetes.
- Experience with machine learning, including model deployment and MLOps concepts.
Location: Brazil
Remote status: Fully Remote
