Design and implement the NextGen data platform architecture using modern technologies.
Build scalable data pipelines integrating datasets from various sources.
Evaluate and recommend technologies, tools, and frameworks.
Manage the technical backlog for the NextGen data platform.
Collaborate with data scientists, analysts, and business stakeholders.
Drive automation initiatives for data platform deployment and maintenance.
Grow our data team with exceptional engineers.
Requirements: Strong programming skills in Python/Scala (Java is a plus).
Solid engineering foundations and architectural design skills.
Experience with cloud-scale, real-time, high-performance data lake solutions.
Expert knowledge of Databricks, Spark, Delta Lake, and related features.
Experience with Databricks operational tasks, such as job cluster optimization.
3+ years in large-scale data engineering.
2+ years developing solutions within Cloud Services (Azure, AWS, GCP).
Experience with data streams processing (Kafka, Spark Streaming).
Knowledge of ETL tools (Rivery, Fivetran, Stitch), pipeline tooling (dbt, Airflow), and reverse ETL.
Strong SQL skills for advanced scenarios.
AI/ML/MLOps/LLM experience is a plus.
Cloud/Databricks certifications are a big plus.
Excellent communication and teamwork skills.
This position is open to women and men alike.