My client is a leading global insurer and is proud to have recently been recognised by Glassdoor as one of the best places to work, reflecting how much their colleagues love working there. The business has an exciting opportunity within its UK transformation team for a Data Engineer. Data engineering within the business is a technical role predominantly concerned with using Azure Databricks and PySpark to transform, clean, and improve raw data into a form where it can deliver value and insight to the business.
This opportunity would suit a bright and ambitious individual who is passionate about data and has some hands-on experience. The role reports to the Chief Data Officer, who will fast-track you towards becoming a Data Architect and give you exposure to a number of different areas across the business.
What will you be doing?
- Build detailed knowledge of data sources (internal and external)
- Manage the day-to-day data asset, keeping it fresh and resolving issues
- Operate in a fast-paced, iterative environment while remaining compliant with information security policies and standards
- Collaborate with data scientists to prepare data for use in their advanced analytical models
- Help architect and build the strategic advanced analytics data platform
- Build re-usable code and data assets using Software Engineering best practices and code frameworks
- Codify best practices and methodology, and share knowledge with other data engineers/scientists in the business
- Ensure data governance practices are followed with regard to the ACA data asset
- Share best practices across different business units
What are we looking for?
- Previous experience in a development or engineering role
- Meaningful experience with at least two of the following technologies: Python, Scala, SQL, Java
- Experience with, or an interest in, cloud platforms such as Azure, AWS, or Databricks
- The ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
- Meaningful experience in at least one data technology, such as:
  - Distributed processing (Spark, Hadoop, EMR)
  - Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)
  - MPP (AWS Redshift, Teradata)
  - NoSQL (MongoDB, DynamoDB, Cassandra, Neo4j, Titan)
- Understanding of Information Security principles to ensure compliant handling and management of data
- Experience in traditional data warehousing / ETL tools (Informatica, Talend, Pentaho, DataStage)
- Ability to clearly communicate complex solutions
If you would like to fast-track your career after your initial years in data, and work for a business that truly values its employees, please apply now and I will be in touch to discuss the opportunity in more detail.