Data Engineer PySpark

What you'll do

  • You will implement and optimize data warehouses and ETL pipelines (see the PySpark sketch after this list)
  • You empower your colleagues and learn through exchange, training and project experience
  • You develop modern, highly automated data platforms with us for large customers across industries, making an active contribution to digitalization
  • You integrate data from a wide range of sources into a central data warehouse or data lake built on modern technologies, or make it available via scalable, robust services and APIs
  • Together with software developers and data analysts, you generate valuable insights from large amounts of data
  • You responsibly manage the lifecycle of Big Data ecosystems (platform, integration, migration)
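
For illustration only, here is a minimal PySpark ETL sketch of the kind of work described above: extract raw data, apply a few transformations, and load the result into a data lake. All paths and column names are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw order data from a (hypothetical) landing zone.
    orders = spark.read.option("header", True).csv("/landing/orders.csv")

    # Transform: cast types, derive a partition column, drop duplicate orders.
    cleaned = (
        orders
        .withColumn("amount", F.col("amount").cast("double"))
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )

    # Load: write partitioned Parquet into a (hypothetical) data lake path.
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/lake/orders/")

    spark.stop()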

Who you are

  • You have knowledge of Python/PySpark
  • You are familiar with the concept of a data warehouse/DWH
  • You are familiar with the Microsoft Azure cloud and have worked with Databricks
  • You have a sound knowledge of software development
  • You have experience with data structures and at least one database technology (e.g. MS SQL, PostgreSQL), and a good understanding of relational data modelling (RDBMS, SQL; see the Spark SQL sketch after this list)
  • You are curious about new technologies and enjoy learning through exchange with your colleagues
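
As a small, hypothetical sketch of the relational side of the role, the snippet below registers two DataFrames as temporary views and joins them with plain Spark SQL; the table and column names are made up for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

    customers = spark.createDataFrame(
        [(1, "Alice"), (2, "Bob")], ["customer_id", "name"]
    )
    orders = spark.createDataFrame(
        [(10, 1, 99.5), (11, 2, 42.0)], ["order_id", "customer_id", "amount"]
    )

    # Expose the DataFrames as temporary views so they can be queried with SQL.
    customers.createOrReplaceTempView("customers")
    orders.createOrReplaceTempView("orders")

    # A classic relational join and aggregation, expressed in SQL on top of Spark.
    spark.sql("""
        SELECT c.name, SUM(o.amount) AS total_spent
        FROM customers c
        JOIN orders o ON o.customer_id = c.customer_id
        GROUP BY c.name
    """).show()

    spark.stop()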

Good to know

  • Personal and professional career development plan, education budget, paid certifications, language courses
  • Innovative projects with prestigious international customers
  • Competitive working conditions and full-time employment
  • Flexible working schedule and possibility to work from home
  • Mentorship and onboarding program
  • Private health insurance, fit pass
  • Monthly team events to support a team-oriented culture
  • Referral program and jubilee awards
  • Minimum 22 days of vacation + extra days off
  • Refreshments, fruit, sweets, snacks, and lunch on Tuesdays
  • Cozy lounge room and terrace to relax and hang out
