
Big Data / Data Science


What we do

We turn customer data into intelligence, deploying platforms and services that enable any business to make data-driven decisions. EPAM’s Big Data capabilities include:

  • Developing data strategies to support business transformation, with our teams helping customers define and implement data-driven strategies
  • Running fast discovery processes that give customers an engagement approach delivering results quickly
  • Implementing practical solutions in areas like digital marketing, data monetization, data turnaround acceleration, real-time analytics, proactive cybersecurity and intelligent asset management
  • Technical advisory, data modeling, advanced R&D and custom development
  • Managed services
  • Marketing solutions including segmentation, churn prediction, lifetime value (LTV), cross-selling and more
  • Manufacturing solutions including predictive maintenance, quality management, process optimization and more
  • Retail big data solutions including demand forecasting, market basket analysis, fraud detection and more
  • Advanced methods and techniques including NLP, text mining, machine learning (SVM, random forests, boosted trees, neural networks, deep learning), statistical analysis and data visualization
  • Search engine engineering
  • Semantic search and knowledge management
  • Search analytics and insights
  • Chatbots and virtual advisors
  • Creating infrastructure for high-availability (HA) clusters
  • Integrating systems with enterprise security
  • Orchestrating CI / CD
  • Setting up network infrastructure in bare metal and cloud environments
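To give a flavor of one of the retail techniques above, market basket analysis measures which products are bought together more often than chance would predict. A minimal sketch in plain Python (the transactions and item names below are hypothetical, not from any EPAM project):

```python
from itertools import combinations
from collections import Counter

def pair_lifts(baskets):
    """Compute lift for every item pair across a list of baskets.

    lift(A, B) = P(A and B) / (P(A) * P(B)); lift > 1 means the pair
    co-occurs more often than independent purchases would predict.
    """
    n = len(baskets)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in baskets:
        items = set(basket)                      # ignore duplicates within a basket
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))
    return {
        (a, b): (c / n) / ((item_counts[a] / n) * (item_counts[b] / n))
        for (a, b), c in pair_counts.items()
    }

# Hypothetical transaction data
baskets = [
    ["bread", "butter", "milk"],
    ["bread", "butter"],
    ["milk", "cereal"],
    ["bread", "butter", "cereal"],
]
lifts = pair_lifts(baskets)
# lifts[("bread", "butter")] ≈ 1.33 — bought together more often than chance
```

In production such counting runs over millions of transactions on Spark or Hadoop rather than in-memory Python, but the statistic itself is the same.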

Fast Facts


Talented professionals


Projects in progress


Cities: Kyiv, Kharkiv, Lviv

Industries We Serve

Financial Services

Travel & Hospitality

Software & Hi-Tech

Retail & Distribution

Media & Business Information Services

Life Sciences & Healthcare

People You May Work With

System Engineer Team Lead

Solution Architect

Big Data Solutions / DevOps Evolution

Our Big Data practice started around 2011 with a very small team and emerged as a separate practice in 2017 with 242 professionals.

DevOps grew quickly and became tightly connected with the development of virtual server technologies. Focused on effective delivery and cloud computing, we started our practice in early 2007, much earlier than most. We experimented with self-service technologies and built a private cloud for training engineers. From 2012 to 2014, we launched and grew a technical community called BIRDS and continued developing hybrid EPAM cloud skills. By 2016, our DevOps practice had expanded to 1,160 professionals across many locations.

Our Processes

Big Data and DevOps processes are part of our development team processes, so all Agile practices and toolsets apply to them.

To be efficient in providing end-to-end solutions for our customers, we have different roles in each team:

  • PMs
  • Solution Architects
  • Developers
  • Data Scientists / Modelers / Analysts
  • Business Analysts
  • Testers

For DevOps, we offer a variety of positions:

  • Junior / Middle / Senior Engineer
  • Team Lead
  • System Architect

Technologies & Tools We Use

We mostly focus on the following technology stack, but we continually track new technologies to create the best products and solutions for our customers.

Technologies, frameworks and programming languages:

  • Java
  • Spark
  • Python
  • Scala
  • ELK stack
  • Kafka
  • Cassandra
  • Sqoop
  • Flume
  • MongoDB
  • RStudio
  • Zeppelin
  • Jupyter and many others

We participate in many high-profile projects that use:

  • Apache Bigtop, Apache Hadoop, Apache HBase, Apache Hive, Apache Ignite, Apache Spark, Apache Zeppelin, Apache HAWQ (incubating), Apache Cassandra, Apache Oozie, Apache ZooKeeper, Apache Groovy
  • OpenJDK
  • Akka
  • OpenStack
  • Drupal
  • Linux kernel
  • Xen
  • Merlin

DevOps-related technologies:

  • Amazon Web Services, Microsoft Azure, Google Cloud (including specialized services)
  • SDKs for Java, Python, Ruby, .NET and Node.js
  • Configuration management tools: Puppet, Chef, Ansible
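The configuration management tools above share one core idea: describe a desired state, diff it against the current state, and apply only the missing changes, so re-running the tool is a no-op (idempotent). A minimal, hypothetical sketch of that model in plain Python — this is an illustration of the concept, not any tool’s real API:

```python
def plan(current, desired):
    """Return the minimal set of changes that turns `current` into `desired`.

    Both states are dicts of setting -> value; the plan contains only
    settings that must be added or updated. An empty plan means the
    system already matches the desired state.
    """
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_plan(current, changes):
    """Apply the planned changes, returning the new state."""
    new_state = dict(current)
    new_state.update(changes)
    return new_state

# Hypothetical host state and desired configuration
current = {"nginx": "absent", "max_conns": 512}
desired = {"nginx": "installed", "max_conns": 512, "tls": "enabled"}

changes = plan(current, desired)       # only nginx and tls differ
state = apply_plan(current, changes)
assert plan(state, desired) == {}      # second run: nothing left to do
```

Puppet, Chef and Ansible each add resource types, ordering, and remote execution on top, but this plan-then-apply loop is the behavior that makes repeated runs safe.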

Life in the Big Data Solutions / DevOps Team

Beyond project work, our teams are very friendly. We arrange community meetings, share knowledge during webinars, participate in EPAM’s Software Engineering Conference (SEC), and take part in assessment sessions and mentoring programs. There is, of course, time for fun: we spend a lot of quality time together during our team-building sessions.