Category: Information Technology
Location: Kraków, małopolskie, Poland
Tesco Technology is a multi-functional, specialist team that drives operational excellence of services, improves scale for our systems and processes globally, and creates business-leading capabilities.
We are an agile, industry-leading team of engineers. We create the continuous integration and delivery tools of the future for the Colleague and Customer & Loyalty areas, solving problems and developing new features through quality, scalable, performant, and maintainable technical solutions. The solutions we are responsible for have a global reach, impacting hundreds of thousands of Tesco colleagues worldwide.
We operate according to a DevOps philosophy. We take responsibility for our software through its entire lifecycle, practising continuous integration and delivery and supporting our code through to production and beyond.
As a Tech Hub, we cooperate within the group of Tesco Technology Hubs located in the UK, Poland, Hungary, and India.
Tesco is a diverse and exciting employer, dedicated to being #aplacetogeton, providing career-defining opportunities to all of our colleagues. If you choose to join our business, we will provide you with:
If that sounds exciting, then we'd love to hear from you.
We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that the data delivery architecture remains optimal and consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimising or even re-designing our company's data architecture to support our next generation of products and data initiatives.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure.
- Create data tools for analytics and data scientist team members that assist them in building and optimising our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Must have skills:
- Hadoop, Hive
- any stream processing framework
Good to have skills:
- Functional programming
- Kafka and the basics of containerisation/Kubernetes