Use our Starter Kit to launch your AdTech project
You don’t have to start from scratch!
After years of AdTech and Big Data projects, we have learned how to meet deadlines, stay within budget, and mitigate project risks. The thousands of hours we spent on research and development, meetings, and communication are now embedded in our Project Starter Kit.

The Starter Kit comprises several modules ready to be used for an easy, risk-free launch of our clients' projects. These modules were developed while solving real business problems for our clients, following best practices in software development and project management.

A typical module contains:
  • Business flow documentation templates;
  • Architectural document templates;
  • A curated set of technologies and open-source libraries proven across multiple tasks in various production environments;
  • Deployment scripts;
  • Our own custom libraries and unique code templates.

To give you a general idea, here are a few examples of the modules we have developed:

Business data storage, control and monitoring

This module, like the others, includes a set of assets and expertise:
  • A Java server template based on Spring Boot and/or microservices;
  • A REST API with Swagger annotations (see the sketch after this list);
  • A MySQL or Postgres business database built with proven fail-safe and backup practices;
  • React-based web app templates;
  • Optional desktop and mobile templates (web, mobile, and desktop apps can share up to 90% of the source code);
  • Deployment scripts (DEV/QA/PRODUCTION).
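
For illustration, here is a minimal sketch of what the REST API part of this template could look like. It is a sketch under stated assumptions, not the actual module code: the CampaignController class, the endpoint paths, and the Campaign fields are hypothetical placeholders, and we assume Spring Boot with OpenAPI v3 (Swagger) annotations on the classpath.

```java
// Minimal sketch of a Spring Boot REST controller with Swagger annotations.
// All names here (CampaignController, Campaign, paths) are hypothetical.
// Assumes spring-boot-starter-web and springdoc-openapi on the classpath.
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api/v1/campaigns")
@Tag(name = "Campaigns", description = "Campaign management endpoints")
public class CampaignController {

    // Hypothetical row shape for a campaign in the business database.
    public record Campaign(long id, String name, long dailyBudgetCents) {}

    @Operation(summary = "List all campaigns")
    @GetMapping
    public List<Campaign> listCampaigns() {
        // The real template would delegate to a service backed by MySQL/Postgres.
        return List.of(new Campaign(1L, "Demo campaign", 50_000L));
    }

    @Operation(summary = "Create a new campaign")
    @PostMapping
    public Campaign createCampaign(@RequestBody Campaign campaign) {
        return campaign; // persistence omitted in this sketch
    }
}
```

Annotations like @Operation and @Tag are picked up by the OpenAPI tooling to generate interactive, always-up-to-date API documentation.
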
High-performance & low-latency endpoints

In Big Data projects, a crucial part of the overall architecture is the set of endpoints that receive external events such as impressions, clicks, and feedback.

The key considerations here are eventual data consistency and the cost of keeping reports precise.

Imagine a system processing 50+ billion events per day, or roughly half a million per second. That scale is demanding, and losing verified statistics about your customers can cost you dearly.

We use data queue solutions like Apache Kafka for stable performance under this load (a minimal sketch follows), which also leads to one more module worth describing.
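
To make this concrete, below is a minimal sketch of how an ingestion endpoint might hand events off to Kafka. The topic name, class name, and tuning values are illustrative assumptions, not the actual module code.

```java
// Sketch of a low-latency event handler that hands impressions off to Kafka.
// Topic name ("ad-impressions"), class name, and settings are hypothetical.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ImpressionIngest {

    private final KafkaProducer<String, String> producer;

    public ImpressionIngest(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Batch and compress to sustain hundreds of thousands of events per second.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        props.put(ProducerConfig.ACKS_CONFIG, "1"); // latency vs. durability trade-off
        this.producer = new KafkaProducer<>(props);
    }

    /** Fire-and-forget send; the HTTP handler can return immediately. */
    public void recordImpression(String campaignId, String eventJson) {
        producer.send(new ProducerRecord<>("ad-impressions", campaignId, eventJson));
    }
}
```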

Replay: sometimes you need to replay a previous day's or even a month's worth of data

You may have to do this for various reasons: a hardware failure, a software issue, or a database crash (yes, it still happens, even with cloud-based databases).

S3 and similar solutions keep a backup of the source data, so it is possible to replay previous periods and restore eventual consistency; a sketch of this is shown below.
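
As a rough illustration, here is what such a replay could look like with the AWS SDK v2 for Java. The bucket name, key layout, and the republishToKafka helper are all hypothetical.

```java
// Sketch of replaying one day's worth of raw events from S3 back into Kafka.
// Bucket, prefix layout, and the republish helper are hypothetical assumptions.
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;

public class EventReplay {

    public static void main(String[] args) {
        String bucket = "adtech-raw-events";          // hypothetical bucket
        String dayPrefix = "impressions/2024-01-15/"; // one object per event batch

        try (S3Client s3 = S3Client.create()) {
            ListObjectsV2Request request = ListObjectsV2Request.builder()
                    .bucket(bucket)
                    .prefix(dayPrefix)
                    .build();

            // Paginate through every batch file written for that day...
            s3.listObjectsV2Paginator(request).contents().forEach(obj -> {
                // ...download it and re-publish each record to the ingestion
                // topic so downstream aggregates can be rebuilt.
                byte[] batch = s3.getObjectAsBytes(b -> b.bucket(bucket).key(obj.key()))
                        .asByteArray();
                republishToKafka(batch);
            });
        }
    }

    private static void republishToKafka(byte[] batch) {
        // Stub: in a real setup this would feed the same producer used by the
        // live endpoints, so the system converges to eventual consistency.
    }
}
```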

Data aggregation and processing

We have also taken care of data storage and aggregation and offer a few options here: the choice depends on your dataset size, the number of events per day, and the data structure. One such option is sketched below.
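
As one illustrative option (not necessarily the one we would choose for your project), here is a sketch of a Kafka Streams topology that counts impressions per campaign in one-minute windows. The topic names are assumptions, and Kafka Streams 3.x is assumed for the windowing API.

```java
// Sketch of one aggregation option: a Kafka Streams topology counting
// impressions per campaign in one-minute windows. Topic names are hypothetical.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

import java.time.Duration;

public class ImpressionAggregation {

    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("ad-impressions", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey() // events are keyed by campaign id at ingestion
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
                .count()
                // Flatten the windowed key into "campaignId@windowStartMillis".
                .toStream((windowedKey, count) ->
                        windowedKey.key() + "@" + windowedKey.window().start())
                .to("impressions-per-campaign-1m", Produced.with(Serdes.String(), Serdes.Long()));

        return builder;
    }
}
```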

Machine & Deep learning

Depending on your needs, we can use various machine and deep learning techniques to analyse the gathered data and apply the results in real time (or post factum); a hypothetical illustration follows.
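
Purely as a hypothetical illustration of real-time model application, the sketch below scores an incoming impression with pre-trained logistic-regression weights. The class, the CTR use case, and the feature encoding are assumptions made for this example, not a description of the actual module.

```java
// Hypothetical sketch: applying a pre-trained model in real time.
// Here a logistic regression predicts click probability for one impression;
// the weights would be learned offline from the gathered event data.
public class CtrScorer {

    private final double[] weights;
    private final double bias;

    public CtrScorer(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    /** Returns the predicted click probability for one encoded impression. */
    public double score(double[] features) {
        double z = bias;
        for (int i = 0; i < weights.length; i++) {
            z += weights[i] * features[i];
        }
        return 1.0 / (1.0 + Math.exp(-z)); // sigmoid
    }
}
```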

To summarize, we have you covered in the most relevant cases, and this approach can save up to 30% in costs compared to similar projects started from scratch.