A critical role for a company with a significant data footprint, which they use to deliver the best possible experience to their customers.
The largest online bingo website in Europe.
Last year, they were acquired by one of the largest businesses on the FTSE 100.
They currently have 500 employees.
The tech team
The team is just over 200 strong, and they've set aside £2 million to grow it to 250 over the next 12-18 months.
Tech is split into 5 teams:
- Game Studio (90)
- Platform (30)
- Player Experience (4)
- InfoSec + IT (25)
- Data team (8)
They plan to double the data team in the next 12-18 months.
Why work here?
They prioritise tech. It's not seen as a cost centre.
There's also a lot of freedom when it comes to what tech they use - it's all about using the right tools for the job.
There are a lot of intelligent people working here, so you'll be challenged (in a good way).
They believe in challenging the status quo.
Their employee engagement rate is really high, and they make decisions based on employee feedback.
It's a very inclusive workplace; they've just employed a well-being, inclusion and diversity business partner.
They have a significant data footprint, which they use to deliver the best experiences possible to their players.
Their Data Engineers play a critical role in making sure that their data is where it needs to be, when it needs to be there.
The role
The Data Engineer is a key member of the Data Engineering team.
You'll work closely with the Product and Platform Engineering teams, the Game Studio, and the Marketing and CX teams.
You'll ensure that the data platform is fit for purpose and capable of meeting the needs of the business.
Responsibilities
Data Ingestion
• Design and implement data pipelines to collect data from various sources, such as databases, logs, APIs, and external data providers.
• Ensure data is ingested efficiently and reliably, with error handling and data validation mechanisms in place.
Data Transformation
• Cleanse, preprocess, and transform raw data into a usable format for analytics and reporting.
• Apply data normalization, aggregation, and enrichment as needed.
Data Storage
• Choose appropriate storage solutions, such as data warehouses, data lakes, or NoSQL databases, based on data requirements.
• Optimize data storage for performance, cost, and scalability.
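To give a flavour of the pipeline work described above, here's a minimal, illustrative sketch of an ingestion step with validation and error handling, written in Python (one of the languages in the requirements). The record schema and field names are invented purely for illustration.

```python
import json

# Invented record schema, purely for illustration.
REQUIRED_FIELDS = {"player_id", "event", "timestamp"}

def ingest(raw_lines):
    """Parse JSON lines, validate each record, and quarantine bad rows."""
    good, dead_letter = [], []
    for line in raw_lines:
        try:
            record = json.loads(line)
            missing = REQUIRED_FIELDS - record.keys()
            if missing:
                raise ValueError(f"missing fields: {sorted(missing)}")
            good.append(record)
        except (json.JSONDecodeError, ValueError) as err:
            # Don't let one bad row stop the pipeline; keep it with its error.
            dead_letter.append((line, str(err)))
    return good, dead_letter

rows = [
    '{"player_id": 1, "event": "spin", "timestamp": "2024-01-01T00:00:00Z"}',
    '{"player_id": 2, "event": "spin"}',
    'not json at all',
]
good, bad = ingest(rows)  # 1 valid record, 2 quarantined with reasons
```

In practice this kind of logic lives in a managed tool or framework rather than hand-rolled loops, but the shape of the job is the same: validate on the way in and route failures somewhere visible.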
ETL (Extract, Transform, Load) Processes
• Develop and maintain ETL processes to move data from source to destination systems.
• Schedule and automate ETL jobs to run at regular intervals.
Data Modelling
• Create and maintain data models, including dimensional models for analytics and relational schemas for transactional data.
• Ensure data models align with business requirements and are optimized for query performance.
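As an illustration of the dimensional modelling mentioned above: a toy sketch in plain Python of turning raw transactional events into a dimension table plus a fact table (in practice this would be SQL or dbt; the event fields and game names are invented).

```python
from collections import defaultdict

# Raw transactional events (invented for illustration).
events = [
    {"player": "alice", "game": "bingo-90", "stake": 2.0},
    {"player": "bob",   "game": "bingo-90", "stake": 1.0},
    {"player": "alice", "game": "slots",    "stake": 5.0},
]

# Build a tiny game dimension: one surrogate key per distinct game.
game_dim = {}
for e in events:
    game_dim.setdefault(e["game"], len(game_dim) + 1)

# Fact rows reference the dimension by key and carry the measures.
fact_stakes = [
    {"game_key": game_dim[e["game"]], "player": e["player"], "stake": e["stake"]}
    for e in events
]

# A typical analytics query over the fact table: total stake per game.
totals = defaultdict(float)
for f in fact_stakes:
    totals[f["game_key"]] += f["stake"]
```

The point of the split is query performance and clarity: slow-changing descriptive attributes live in the dimension, while the fact table stays narrow and fast to aggregate.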
Data Quality and Governance
• Implement data quality checks and data profiling to identify and address data quality issues.
• Enforce data governance policies and ensure compliance with data privacy regulations.
Data Integration
• Integrate data from disparate sources to create a unified view of data for reporting and analysis.
• Handle schema evolution and versioning to accommodate changes in data sources.
Performance Optimisation
• Monitor and optimize data pipelines and database performance to meet service-level agreements (SLAs).
• Identify and resolve bottlenecks in data processing.
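The data quality checks and profiling mentioned above can be as simple as this sketch: a stdlib-only profile of one column with a pass/fail gate (the column name and threshold are invented for illustration).

```python
def profile(rows, column):
    """Simple profiling for one column: row count, null rate, distinct values."""
    values = [r.get(column) for r in rows]
    nulls = sum(v is None for v in values)
    return {
        "rows": len(values),
        "null_rate": nulls / len(values) if values else 0.0,
        "distinct": len({v for v in values if v is not None}),
    }

rows = [{"country": "UK"}, {"country": "UK"}, {"country": "SE"}, {"country": None}]
stats = profile(rows, "country")

# An example quality gate: fail the run if too much data is missing.
assert stats["null_rate"] <= 0.5, "too many missing countries"
```

Dedicated tools (dbt tests, Great Expectations and the like) package the same idea up with reporting, but the underlying checks look much like this.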
Scalability and Resilience
• Design data pipelines and storage systems to scale horizontally and handle increasing data volumes.
• Implement fault tolerance and disaster recovery mechanisms.
Data Security
• Ensure data security by implementing access controls, encryption, and authentication measures.
• Monitor and audit data access for compliance and security breaches.
Documentation
• Maintain thorough documentation of data pipelines, data dictionaries, and data lineage.
• Document ETL processes, data sources, and transformations for transparency and knowledge sharing.
Collaboration
• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
• Work closely with data architects and data stewards to align data engineering efforts with the overall data strategy.
Tooling and Technology
• Stay current with data engineering tools and technologies, including ETL tools, data integration platforms, and big data frameworks like Apache Spark.
• Evaluate and select the right tools for specific use cases.
Continuous Improvement
• Continuously seek opportunities to optimize data pipelines, reduce technical debt, and improve data engineering processes.
• Keep up with industry best practices and emerging trends in data engineering.
Monitoring and Alerting
• Implement monitoring and alerting systems to proactively identify and respond to issues in data pipelines.
• Establish automated notifications for pipeline failures and performance anomalies.
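The monitoring and alerting bullets above boil down to: never let a pipeline fail silently. A minimal sketch, with an in-memory list standing in for a real notification channel (the step and failure are simulated):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
alerts = []  # stand-in for a real notification channel (pager, Slack, email)

def run_monitored(name, step):
    """Run one pipeline step; on failure, record an alert instead of dying silently."""
    try:
        step()
        logging.info("%s succeeded", name)
        return True
    except Exception as err:
        alerts.append(f"{name} failed: {err}")
        logging.error("%s failed: %s", name, err)
        return False

def broken_load():
    raise RuntimeError("source unreachable")  # simulated transient failure

ok = run_monitored("load_stakes", broken_load)
```

Orchestrators like Airflow provide this out of the box via task callbacks and retries; the sketch just shows the principle.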
Requirements
• Technical expertise with data models, data mining, and segmentation techniques
• Knowledge of programming languages (e.g., C#, SQL and Python)
• Hands-on experience with SQL databases (e.g., Redshift, SQL Server, PostgreSQL)
• Hands-on experience with data tooling (e.g., Airflow, dbt, AWS Kinesis)
Get in touch if you think this role is worth a conversation: email@example.com, or book a call in my diary: www.calendly.com/tomhainton
If you have a CV, please apply and attach a copy.