Exacaster operates one of the largest big-data environments in the Baltics and personalizes offers for 40 million consumers daily for clients in the Baltics, the USA, Central and South America, and the Caribbean. Our services cover a full range of data services, from big data management & engineering to data-driven customer value management solutions.
Exacaster is a Lithuanian company established more than 10 years ago, with its main office in Vilnius, the capital of Lithuania.
As a team, we share a unique culture based on our core values: ownership, growth, open communication, meaningful relationships and customer obsession. We truly believe that in order to better understand our customers and deliver the highest quality solutions, in line with our company’s purpose: “Empower others to use data for good and create impactful change!”, we must be close to our biggest customers!
If you want to dig deeper into the big data field, progress quickly in your career, and become an advanced Data Engineer, you should definitely apply to Exacaster. We are looking for a fully remote Junior Big Data Engineer. Start your journey now!
About the team:
We build advanced solutions, and we’re expanding. Your future team specializes in projects related to data preparation, transformation, and aggregation at large data scale.
The team’s main challenge is to solve real business problems for our clients, leveraging the Hadoop stack and cloud solutions such as AWS and Microsoft Azure, in addition to core data warehousing tools and other big data technologies.
We use common data architecture practices to architect, design, and develop data and analytics platforms (e.g., data lakes, lakehouses, data warehouses) that are used to produce analytical products such as reports, dashboards, ML models, etc.
We are talking about petabytes of data.
You will focus on ensuring data quality, analyzing and solving big data problems for our LATAM client, and developing your learner mindset.
Your main responsibilities:
- Build data pipelines that pull together information from different internal and external data sources and prepare the data for analytical use.
- Develop and support ETL processes.
- Work on code change requests.
- Analyze and solve data engineering problems raised by our clients.
- Ensure Data Quality.
We hope you have:
- Fundamental RDBMS knowledge.
- SQL knowledge.
- Basic understanding of how data pipelines work (what ETL means).
- Knowledge of, or willingness to learn, at least one programming language from the list: Python/Scala/Java.
- Native Spanish and good English skills.
Bonus points if you have:
- Hadoop/Spark/AWS knowledge.
We offer:
- Monthly salary for this position: 1500-2000 USD gross.
- Participation in the company’s stock options program.
- Benefits & a personal learning budget of 2000 USD/year.
- Every second Friday – half a day dedicated to learning.
- Ownership in your role.
- Mindletic – an app for your mental & emotional health.
- All the support you need from our experienced team to become an even better professional.
- And the most important thing – you will be part of a great international team!
Ready to take on the challenge?