Data Engineer
BURN designs, manufactures, and distributes aspirational fuel-efficient cooking products that save lives and forests in the developing world. BURN has revolutionized the global cookstove sector by proving the business case for selling high-quality, locally manufactured, and unsubsidized cookstoves. Since 2013, BURN has sold 200,000+ high-quality, locally manufactured, and unsubsidized Jikokoa™ stoves in East Africa. These stoves have helped 1,000,000+ beneficiaries save $39 million in fuel expenditures and 626,221 tons of wood while reducing indoor air pollution by 65%. BURN currently sells ~10,000 stoves per month and intends to double sales by the end of 2017.
About the Role:
BURN is seeking a skilled and experienced Data Engineer or data pipeline system designer to develop and deploy the solution described below. The role involves designing and deploying an end-to-end data pipeline system that centralizes data from various sources and enables data professionals to query it easily. The system should also allow users to quickly pull up all relevant information for a product or customer through a consumer-friendly user interface.
Duties and Responsibilities:
- Design and deploy an end-to-end data pipeline system that centralizes and processes large volumes of structured and unstructured data from various sources.
- Develop user-friendly interfaces that enable users to easily pull up relevant information for a product and customer.
- Collaborate with data scientists, data analysts, and other stakeholders to understand data requirements and design data solutions that meet their needs.
- Design and implement efficient data extraction, transformation, and loading (ETL) processes to populate the pipeline with data from various sources.
- Build and maintain robust data pipelines that ensure data is accurate, up-to-date, and easily accessible.
- Develop and maintain data models, data schemas, and data dictionaries.
- Use APIs, batch exports, and SQL queries to extract data from various sources and integrate it into a SQL database (see the sketch after this list).
- Perform data cleaning, data transformation, and data integration tasks to ensure data quality and consistency.
- Collaborate with data analysts, data scientists, and other stakeholders to ensure data is processed and analyzed effectively.
- Monitor and optimize data pipelines to ensure they are performing efficiently.
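As a rough illustration of the ETL and API-to-SQL work described above, the sketch below extracts records from a REST endpoint, applies a light cleaning step, and loads the result into a SQL table. This is a minimal sketch only: the endpoint URL, field names, and schema are hypothetical, and SQLite stands in for whatever warehouse the role would actually use.

```python
import sqlite3

import requests

# Hypothetical source endpoint -- not a real BURN API.
API_URL = "https://api.example.com/v1/sales"


def extract(url: str) -> list[dict]:
    """Pull raw records from the source API."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


def transform(records: list[dict]) -> list[tuple]:
    """Basic cleaning: drop rows missing a customer ID, normalize product names."""
    rows = []
    for r in records:
        if not r.get("customer_id"):
            continue  # skip incomplete records
        rows.append(
            (r["customer_id"], r.get("product", "").strip().lower(), r.get("amount", 0.0))
        )
    return rows


def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Insert cleaned rows into a SQL database (SQLite here for brevity)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS sales (
                   customer_id TEXT,
                   product TEXT,
                   amount REAL
               )"""
        )
        conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)


if __name__ == "__main__":
    load(transform(extract(API_URL)))
```

In practice, logic like this would run as a scheduled, monitored job (per the pipeline-monitoring duty above) rather than a one-off script.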
Success in the role will be measured by delivering the following within the first few months:
- Successful deployment of the end-to-end data pipeline system, including system implementation, ETL processes, and data handling capabilities.
- Data accessibility and usability, measured by the ease of use of the interfaces and the speed with which relevant information can be retrieved.
- Data quality and consistency, monitored through data accuracy, completeness, consistency, and integrity.
- Collaboration and stakeholder satisfaction, measured by stakeholder feedback on the effectiveness of data solutions and by maintaining positive working relationships.
Skills and Experience:
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- At least 5 years of experience in designing and deploying end-to-end data pipelines.
- Strong knowledge of SQL, ETL tools, and data warehousing concepts.
- Experience working with APIs, batch exports, and SQL queries to extract data from various sources.
- Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
- Strong data analysis and problem-solving skills.
- Experience working with Microsoft Dynamics, open-source data systems like KOBO, and Call Center platforms would be an added advantage.
- Excellent communication skills and ability to work in a team environment.