Algorithm Engineer

Reports to | CTO
Location | D.C. Metro Region

Job Purpose

As an Algorithm Engineer with WhiteHawk, you will be a member of a team that develops and improves agent modeling and algorithms based on personal factors in order to optimize results. Techniques used include aspects of queuing theory, probability, statistics, data mining, optimization, and simulation. The engineering team is responsible for developing and maintaining an internal tool that supports the modeling and algorithm specifics of each implementation.

Duties and Responsibilities

As an Algorithm Engineer with WhiteHawk, you will…

  1. Develop solutions combining data blending, profiling, mining, statistical analysis, and machine learning to better define and curate models, test hypotheses, and deliver key insights

  2. Collaborate with BI and other developers to ensure development and delivery of analytics solutions

  3. Build on a deep understanding of our content, user, and vendor information to develop rich analytics, engagement, and promotional offerings

  4. Provide technical guidance across a number of aspects of data science and engineering, including hypothesis testing, analysis, modeling, and production deployment

Qualifications

As an Algorithm Engineer, you will work with data engineers and analysts to identify data sources for discovery and profiling, and to define appropriate logical models that accurately reflect existing business processes.

A WhiteHawk Algorithm Engineer will have...

  1. A Bachelor’s degree in Computer Science, Engineering, Mathematics, or a field focused on statistical analysis, or equivalent experience, is required (a Master’s degree in a related discipline is a plus)

  2. 3–5 years of data analytics experience working in large-scale/distributed SQL, NoSQL, and/or Hadoop environments

  3. Proficiency in building data pipelines or related ETL pipelines within large-scale distributed frameworks as well as related query and processing technologies such as Spark, Impala, Hive, or Presto

  4. Thorough understanding of summary statistics, machine learning, natural language processing, and mathematically focused programming languages such as R and Python

  5. Experience leveraging Elasticsearch for ingest and query design, or related solutions such as Kibana

  6. Experience leveraging messaging and queuing solutions such as Kafka or RabbitMQ

  7. Experience working in hybrid cloud environments is a plus

 

If this sounds like you, please send your cover letter and resume to [email protected]

We hope you will join us!