Grid Dynamics is hiring a

Big Data Engineer (with Scala and Spark)

Full-Time

About us:

Grid Dynamics is the engineering services company known for transformative, mission-critical cloud solutions for the retail, finance, and technology sectors. We architected some of the busiest e-commerce services on the Internet and have never had an outage during the peak season. Founded in 2006 and headquartered in San Ramon, California, with offices throughout the US and Eastern Europe, we focus on big data analytics, scalable omnichannel services, DevOps, and cloud enablement.

Role overview:

We are looking for an experienced and technology-proficient Big Data Engineer to join our team!

Our customer is one of the world’s largest technology companies, based in Silicon Valley with operations all over the world. On this project, we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets.

Project description:

The Advertising Platforms group makes it possible for people around the world to easily access informative and imaginative content on their devices while helping publishers and developers promote and monetize their work. Today, our technology and services power advertising in Search Ads for some of the biggest search and news providers. Our platforms are highly performant, deployed at scale, and set new standards for enabling effective advertising while protecting user privacy. We are looking for an ambitious self-starter who can thrive in an agile environment. You will develop distributed systems to establish, refine, and automate our anti-fraud processes across our advertising surfaces.

Responsibilities:

  • Running big data analytics and building large-scale data infrastructure
  • Detecting meaningful data patterns
  • Assuring the integrity of our data
  • Measuring fraud activity and its impact on campaign and user performance
  • Analyzing the results of mitigations against fraud

Requirements:

  • Strong knowledge of Scala
  • In-depth knowledge of Hadoop and Spark, experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams)
  • Understanding of best practices in data quality and quality engineering
  • Ability to quickly learn new tools and technologies
  • English language proficiency is required

Will be a plus:

  • Knowledge of Unix-based operating systems (bash/ssh/ps/grep etc.)
  • Experience with JVM build systems (SBT, Maven, Gradle)

We offer:

  • Remote work environment
  • Work on bleeding-edge projects on a team of experienced and motivated engineers
  • Flexible working hours
  • Competitive salary
  • Professional development opportunities
  • Specialization courses
  • 24 days of annual leave plus an additional 5 sick days
  • Floating holidays
  • Private medical insurance for employees and their family members
  • Benefits basket with a total value of 650 euro/year gross

Apply for this job
