Key Responsibilities:

Serve as a trusted technical advisor, helping customers solve complex data challenges through scalable, well-architected solutions.

Design, build, and monitor distributed data pipelines to support big data processing, both batch and streaming.

Deliver high-quality, reproducible, and maintainable datasets from structured and unstructured sources at scale.

Collaborate effectively with other data roles to ensure that data solutions align with user and business requirements.

Understand the needs of diverse data producers and consumers to ensure data products meet performance, quality, and usability standards.

Work in Agile environments, delivering value iteratively and continuously improving based on user feedback.

Stay up to date with emerging data technologies and be willing to learn new tools and frameworks as needed.

Contribute to a collaborative culture, whether working on-site, remotely, or in hybrid settings.

Required Skills & Qualifications:

4+ years of experience in a Data Engineering or similar role.

Proven ability to design, develop, and maintain data products that meet consumer needs.

Strong understanding of data processing architectures such as Data Lakes, Data Warehouses, Data Mesh, and batch/streaming pipelines.

Proficiency in working with both relational databases (e.g., PostgreSQL) and non-relational data stores (e.g., MongoDB, Hive).

Practical knowledge of handling diverse data types: tabular, geospatial, time-series, image, text, and graph data.

Experience with big data tools and frameworks such as Hadoop, Spark, and Kafka.

Strong programming skills in Java, Python, and SQL.

Familiarity with containerisation tools such as Docker and workflow orchestration tools such as Airflow.

Experience in cloud platforms such as AWS, GCP, or Azure.

Understanding of data security, governance, and privacy principles.

Ability to deliver high-quality work within deadlines, ideally in client-facing environments.

Preferred Skills:

Experience working with cross-functional teams including data science, architecture, and DevOps.

Exposure to CI/CD pipelines and DevOps best practices in data environments.

Familiarity with machine learning workflows and visualisation tools.

Argyll Scott Asia is acting as an Employment Business in relation to this vacancy.
