Internship Data Engineer


Published on December ⚠, ⚠

Job Description

L&C Connect is accelerating its digital transformation, with the mission of becoming a digital sports platform that lets customers discover our world through a variety of sports-centric experiences. Our digital teams in Lille, K&V Solutions, London and Montreal, more than ⚠,⚠ employees in total, work together to develop and advance digital products, always aiming to provide the best value to users.

To strengthen our team in Belgium, we are looking for a Data Engineer intern to join a passionate, ambitious and enthusiastic data and artificial intelligence team. As a Data Engineer at L&C Connect, you will work closely with the marketing, e-commerce and digital teams to provide strategic insights that drive business decisions and optimize results.

Your Responsibilities

  • You will contribute to existing ETL pipelines: make data accessible to data analysts and provide them with efficient data marts. Use PySpark to process big data while using as little memory as possible. Deploy your work to pre-production and production environments and follow good guidelines to ensure your code works properly. Understand complex software and data architectures and find ways to meet requirements efficiently.
  • You will have access to a complete data engineering environment: pair programming and versioned code with GitHub; Python and OOP as the programming approach; PySpark as the big data framework; Databricks and Kubernetes to run your jobs; Apache Airflow as the scheduler; and AWS services to meet your needs (S⚠, ECR, EKS, MWAA, CA).
  • Working with our Lead Data Engineer, you will: Implement new pipelines: create project pipelines that have been validated by the United team; write high-quality code from templates, with unit and integration tests; follow best practices for storing personal and sensitive data. Monitor our existing data: troubleshoot failing pipelines to bring them back online as quickly as possible; proactively monitor and check data quality on a daily basis.
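As a simplified sketch of the data-mart step described above: in production this logic would live in a PySpark job, but it is reduced here to plain Python so the shape of the transformation is visible. All field names (`store_id`, `amount`) and the function name are hypothetical examples, not part of the actual pipeline:

```python
from collections import defaultdict

def build_sales_mart(raw_rows):
    """Aggregate raw event rows into a small per-store data mart.

    Processing rows one at a time keeps memory usage low, in the
    spirit of the memory guideline above. Field names are invented
    for illustration only.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for row in raw_rows:  # stream rows instead of materialising everything
        store = row["store_id"]
        totals[store] += row["amount"]
        counts[store] += 1
    return [
        {"store_id": s, "revenue": totals[s], "orders": counts[s]}
        for s in sorted(totals)
    ]

raw = [
    {"store_id": "BE-01", "amount": 19.5},
    {"store_id": "BE-01", "amount": 5.25},
    {"store_id": "BE-02", "amount": 42.5},
]
mart = build_sales_mart(raw)
```

In a real PySpark version the same idea would be a `groupBy("store_id")` with `sum` and `count` aggregations, letting Spark handle partitioning and memory.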

Professional Profile

Who are you?

  • You are completing a bachelor's or master's degree (engineering, computer science or statistics) with a focus on data engineering, big data or data science.
  • You can use the most common programming languages for data engineering (SQL, Python, RM Solutions, Java, C).
  • You have already used PySpark, or Spark with RM Solutions.
  • You know the basics of GitHub and follow version-control best practices.
  • You understand basic terminal usage and UNIX environments (macOS/Linux).
  • You have used dashboard tools (e.g. Google Data Studio / Looker Studio, Tableau, Power BI).
  • You are fluent in French and have B⚠-level English, which is very important given the international context of this position.
  • Basic knowledge of Dutch is an advantage.
  • Experience with at least one major cloud environment (GCP, AWS, Azure) is an advantage, as is experience with Databricks.
  • Candidates with Docker/Kubernetes experience are preferred.
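The unit-testing and version-control habits asked for in this role can be illustrated with a minimal, self-checking unit test. The helper function and its alias table are hypothetical, invented here only to show the style of test expected around pipeline code:

```python
def normalise_country(value: str) -> str:
    """Map loosely formatted country input to a short code.

    A hypothetical pipeline helper; the alias table is invented
    for illustration.
    """
    aliases = {"belgium": "BE", "belgie": "BE", "belgique": "BE"}
    cleaned = value.strip().lower()
    return aliases.get(cleaned, cleaned.upper())

def test_normalise_country():
    # Known aliases collapse to one code; unknown input is upper-cased.
    assert normalise_country(" Belgium ") == "BE"
    assert normalise_country("belgique") == "BE"
    assert normalise_country("fr") == "FR"

test_normalise_country()
```

With pytest, the `test_` function would be discovered and run automatically; calling it directly keeps the sketch self-contained.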

What we offer

  • Flexible work organization (⚠ day of remote work per week after the first two weeks)
  • Free choice of laptop (Mac, Windows)
  • Skill development (various projects, languages and technologies)
  • In-house training

Convinced? Join us! Come to a company that is undergoing huge changes and experience a unique human adventure where you can grow and develop in a very friendly atmosphere.


Internship Data Engineer

From 6 May 2024 to 5 July 2024

Informatics
21 EUR
Av. Jules Bordet 1, 1140 Bruxelles, Belgium

 
