We are seeking Data Engineers who can build scalable, reliable data pipelines and infrastructure, with mastery of SQL, Python/Scala, cloud platforms (AWS), and big data technologies (Spark, Kafka, Hadoop).
Key skills:
Data modeling
ETL/ELT pipeline construction
Data warehousing and orchestration tools like Airflow
Strong problem-solving, collaboration, and data governance skills are crucial
Knowledge of manufacturing processes is preferred.
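
To give candidates a concrete sense of the ETL/ELT work this role involves, here is a minimal sketch in Python. All names (`sensor_readings`, the CSV columns, the manufacturing-flavored sample data) are illustrative assumptions, not part of the actual role or any specific system; production pipelines would run on Spark/Airflow rather than stdlib SQLite.

```python
# Minimal ETL sketch: extract rows from CSV, transform, load into SQLite.
# Table and column names are hypothetical examples only.
import csv
import io
import sqlite3

RAW_CSV = """machine_id,temp_c,reading_ts
M-01,71.5,2024-01-01T00:00:00
M-02,,2024-01-01T00:00:00
M-01,69.8,2024-01-01T01:00:00
"""

def extract(text):
    """Parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing readings and cast temperatures to float."""
    return [
        {"machine_id": r["machine_id"],
         "temp_c": float(r["temp_c"]),
         "reading_ts": r["reading_ts"]}
        for r in rows
        if r["temp_c"]  # skip rows with an empty reading
    ]

def load(rows, conn):
    """Insert cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sensor_readings "
        "(machine_id TEXT, temp_c REAL, reading_ts TEXT)"
    )
    conn.executemany(
        "INSERT INTO sensor_readings VALUES (?, ?, ?)",
        [(r["machine_id"], r["temp_c"], r["reading_ts"]) for r in rows],
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM sensor_readings").fetchone()[0]
```

The same extract/transform/load shape scales up directly: in this role the extract step would typically read from Kafka or S3, the transform would run on Spark, and Airflow would orchestrate the stages.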
Responsibilities:
Collect, process, and analyze large datasets to extract meaningful insights
Design and conduct experiments to test hypotheses and validate data-driven solutions
Collaborate with cross-functional teams to identify data needs and align strategies with organizational goals
Create templates, dashboards and visualizations to communicate findings effectively to stakeholders
Ensure data integrity and accuracy through cleaning, preprocessing, and validation techniques
Optimize data collection and storage processes for efficiency and scalability
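
The data integrity and validation responsibility above can be sketched as rule-based record checks. The specific rules, field names, and plausible-range bounds below are hypothetical illustrations, not requirements of the role.

```python
# Minimal data-validation sketch (hypothetical rules, stdlib only):
# flag records that fail integrity checks before they reach storage.
from datetime import datetime

def validate(record):
    """Return a list of rule violations for one record (empty = valid)."""
    errors = []
    if not record.get("machine_id"):
        errors.append("missing machine_id")
    temp = record.get("temp_c")
    if temp is None or not (-40.0 <= temp <= 150.0):
        errors.append("temp_c out of plausible range")
    try:
        datetime.fromisoformat(record.get("reading_ts", ""))
    except ValueError:
        errors.append("bad reading_ts")
    return errors

good = {"machine_id": "M-01", "temp_c": 71.5,
        "reading_ts": "2024-01-01T00:00:00"}
bad = {"machine_id": "", "temp_c": 999.0, "reading_ts": "not-a-date"}
```

Collecting violations per record (rather than failing fast) makes it easy to report data-quality metrics to stakeholders and route bad records to a quarantine table for review.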