Overview

Big Data Engineer

Responsibilities

Design the architecture of our big data platform
Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries
Design and implement data stores that support the scalable processing and storage of our high-frequency data
Maintain our data pipeline
Customize and oversee integration tools, data warehouses, databases, and analytical systems
Configure data-access tools and ensure their availability for all data scientists

Job Qualifications and Skill Sets

3 to 5 years of relevant data engineering experience
Bachelor’s degree or higher in computer science, data science, or a related field
Hands-on experience with data cleaning, visualization, and reporting
At least 2 years of relevant experience with real-time data-streaming platforms such as Apache Kafka and Spark Streaming
Experience working in an agile environment
Familiarity with the Hadoop ecosystem
Experience with big data technologies such as MapReduce, Apache Cassandra, Hive, Presto, and HBase
Excellent analytical and problem-solving skills
Excellent communication and interpersonal skills