
Data Engineer (m/f/d)

  • VNR Verlag für die Deutsche Wirtschaft AG
  • Bonn
  • Professional experience (junior level)
  • IT
  • Software Development


Our mission within the growing VNR publishing group, with its 460 employees, is to implement data-driven marketing through monitoring and automation. To fulfill this mission, the data department develops descriptive process KPIs (performance metrics), represented through reporting and key figures for operational and strategic control. In addition to marketing automation, we use predictive analytics based on machine learning methods.

We are a cross-functional team with different orientations and levels of experience from the fields of business administration, marketing, physics, and computer science. You will work with 10 data analysts, engineers, and scientists. We also have a strong network within the various corporate divisions and rely on customer-oriented development.
To give you a better understanding of what we do, we have described the transition to a serverless customer data platform together with AWS.

Your mission:

  • You will be part of our data engineering team, further developing our customer data platform.
  • You will support the transformation of our current services to infrastructure as code (IaC).
  • You will research new technologies and evaluate whether they can improve our current tech stack.
  • You will act as the interface between our data scientists and data analysts, automating the data processing.
  • You will present your work on a regular basis to the team and to other IT departments.

Our tech stack:
  • CI/CD pipeline (Git + AWS CodePipeline)
  • Data lake: S3, AWS Glue, EMR (Spark), Athena, Fargate, Step Functions, Lambda, API Gateway, DynamoDB
  • Machine learning: SageMaker, Comprehend, EMR (Spark), scikit-learn
  • IaC (AWS CDK)
  • Languages: Python 3.x, TypeScript/JavaScript, React/Vue
  • BI tools: QuickSight
  • UI: Amplify

Your skills:

  • You have hands-on experience with Git, SQL (Presto SQL), and Python.
  • You can create use-case-driven data models and build validation strategies based on them.
  • You are familiar with test-driven development (TDD) and have basic knowledge of CI/CD pipelines.
  • You have worked on at least one AWS project (at university, in a past job, or privately) and are familiar with serverless concepts.
  • You have already helped build ETL processes (ideally in AWS or with a framework like Airflow).
  • You are eager to learn and to educate yourself further with the help of the team, AWS certificates, research, and additional recommended learning material.
  • It would be a bonus if you have initial experience with cloud services in the AWS ecosystem (such as RDS, S3, DynamoDB, Lambda, the AWS CLI, Step Functions, Fargate, EMR).
  • Experience with the Apache Spark framework and AWS Glue is not mandatory but would be an advantage.
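To make the ETL and validation points above concrete, here is a hedged toy sketch in plain Python (not code from our stack; field names and rules are invented): a transform step that parses dates, normalizes a revenue field, and drops rows that fail validation, with a TDD-style check at the end.

```python
from datetime import datetime


def transform(records):
    """Toy ETL transform: parse the date, normalize revenue to a
    rounded float, and drop any row that fails validation."""
    clean = []
    for row in records:
        try:
            clean.append({
                "day": datetime.strptime(row["day"], "%Y-%m-%d").date(),
                "revenue": round(float(row["revenue"]), 2),
            })
        except (KeyError, ValueError):
            # Row is missing a field or has an unparseable value: skip it.
            continue
    return clean


# TDD-style check: invalid rows are dropped, valid ones normalized.
raw = [
    {"day": "2023-04-01", "revenue": "12.5"},
    {"day": "not-a-date", "revenue": "3"},  # fails date validation
    {"revenue": "7"},                       # missing "day" key
]
result = transform(raw)
```

In a real pipeline, a step like this would typically run inside a Glue job, a Lambda function, or an Airflow task, with the validation rules defined per use case.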

Our offer:

  • We support your continued education in the AWS and data/BI environment.
  • An additional monthly salary for your own educational use.
  • You'll learn how to work with machine learning systems: pipelines, experiments, evaluation.
  • Flexible working hours and the option to work remotely.
  • An agile, test-driven, customer-focused development approach, as well as the opportunity to work with cutting-edge technologies.
  • Freedom to implement your own ideas within an open-minded and supportive culture.
  • A dynamic team with qualified and friendly colleagues.
  • Monthly offsites for coding projects.
  • A variety of benefits including free coffee, water, and chill-out areas, as well as a location close to the train station and free parking.

Sounds interesting?

Do not hesitate to apply! Our team is looking forward to getting to know you!
  • Adriana Zenleser
  • Recruiter
  • 0228/8205-0