Prime Holding JSC (Prime) is a leading software product innovation company and IT consultancy that offers guidance and actionable solutions to tech problems. We at Prime are passionate about applying the newest technology to solve the most challenging business problems of our clients worldwide. We understand what drives fast-growing companies forward and are excited to help them change the world.

We are looking for a Data Engineer in the area of Gene Therapy Data Sciences.

The role in a nutshell:

As a data engineer, you can expect a versatile position in a creative work environment, supporting active portfolio projects in the laboratory through digitization and data modeling, as well as driving strategic initiatives in data sciences, data engineering, and IT/OT. Given the open field of our current and future tasks, you have a wide range of opportunities to introduce your own approaches and new directions.

Main tasks of the role:

  • You work as part of a cross-skilled team to understand, design and build data pipelines in order to maximize the value of data in the organization.
  • You make sure that data is well structured, described by means of metadata, and made readily available to the whole organization (FAIR: findable, accessible, interoperable, reusable), treating data as a product – with machine/deep learning as a secondary step to build insights from it.
  • You assemble large, complex data sets that meet functional and non-functional business requirements.
  • You identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • You build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • You work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • You create data tools for analytics and data scientist team members that assist them in building and optimizing our products.
  • You help define standards and good practices regarding data architecture.

Skills and Qualifications:

  • You are an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
  • You know how to design, build, scale and optimize data-driven solutions.
  • You have experience building big-data processing pipelines (e.g. with Apache Spark, Apache Kafka, MuleSoft ESB).
  • You have experience with different types of data stores (OLTP and OLAP with SQL, Elasticsearch, document and graph databases) and with data modeling.
  • You have experience with and/or are eager to work with modern architectures (e.g. event-driven architecture, data mesh).
  • You are open to new ideas and question conventional thought patterns and work processes.
  • You value working in a performance-driven environment, fueled by mutual respect, discussion, and collaboration.
  • Your fluent English allows you to work efficiently in a global environment.

What we offer:

  • Competitive remuneration package
  • Health insurance – VIP package
  • Food vouchers and corporate discounts
  • Work-life balance: 25 days paid vacation + work from home policy
  • Designated budget for home office equipment
  • Bonuses for special occasions
  • Access to a library of technical books, both offline and online
  • Internal training sessions and team-building events
  • Challenging projects offering the opportunity to work on world-class products

So, if you’re looking for a role where you’ll be challenged to learn, grow, and have a high impact, and you aspire to become a multi-faceted professional who can optimize processes, systems, and overall business strategy in fast-paced environments, you seem like someone we’d like to meet.
