The company

Adore Me is a platform of digital fashion brands founded on four pillars: inclusive sustainability, proprietary technology, differentiated service, and owned brands. Adore Me is the lingerie underdog, disrupting the huge, traditional lingerie & apparel industry, with revenue growing from $1M in 2012 to $160M in 2020.


The tech

The tech factor is strong in our DNA. We have more engineers than marketers, fashion designers, or business folks, and we're proud of it.
The end goal is to craft the perfect bra and ship it to millions of women. So we're talking about scalable e-commerce, coordinating our factories & robots in the distribution center, and internal tools that catalyse everything.

We're one of those scaling technology startups. Think many small teams, DevOps, millions of users, hundreds of git repos, and crazy experimenting.

It's not all perfect. We're always evolving and change is everywhere. We're moving from Magento to microservices, from MySQL databases to a cloud DWH, and from local environments to a serverless machine learning platform, and there's always some technical debt to hunt & kill. We manage a lot of uncertainty and always try to learn from our mistakes. We're getting there.

The job

You will work in an organised full-stack scrum team, alongside data people.

As a data engineer you'll spend your days on ELT procedures, APIs and message brokers, data mapping and data modeling, mart-layer business calculations, and reports, plus building BI features and tools. We're all a bit multidisciplinary, so you shouldn't be afraid to set up a VM or a Docker container, investigate some hanging workers, or fix a failing unit test.
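To make the ELT part concrete, here's a toy sketch of the pattern, not our actual code, with made-up project, bucket, and table names: land the raw JSON in the warehouse first, then do the business logic in SQL.

```python
# Toy ELT sketch -- all names (project, bucket, datasets, tables) are
# placeholders invented for this posting, not real Adore Me resources.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Load: ship newline-delimited JSON into a raw staging table as-is.
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders/*.json",
    "example-project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    ),
)
load_job.result()  # wait for the load to finish

# Transform: the business calculations live in SQL, in the mart layer.
client.query(
    """
    CREATE OR REPLACE TABLE mart.daily_revenue AS
    SELECT DATE(created_at) AS day, SUM(amount) AS revenue
    FROM staging.orders_raw
    GROUP BY day
    """
).result()
```

The order of the letters is the point: in ELT the transformation happens inside the DWH, after loading, not before it.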

More concretely, data at Adore Me can mean: enhancing the DWH ecosystem, integrating various machine learning projects on top of it, and working with 30+ distinct sources that expose data in at least 6 different technologies. JSON and nested data are the de facto format. We're migrating batch processes to streaming with cloud-native technologies like Dataflow, Airflow, Pub/Sub, and Datastore, and building custom BI tools that expand beyond the “database” borders.
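To illustrate the batch-to-stream part, here's a minimal Apache Beam sketch, with placeholder topic, table, and schema names, that reads JSON events from Pub/Sub and appends them to BigQuery:

```python
# Minimal streaming sketch -- topic, project, table and schema are
# hypothetical placeholders, not real Adore Me infrastructure.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # add runner/project flags for Dataflow

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/orders")
        | "ParseJson" >> beam.Map(json.loads)  # Pub/Sub hands us raw bytes
        | "WriteToDWH" >> beam.io.WriteToBigQuery(
            "example-project:analytics.orders_stream",
            schema="order_id:STRING,amount:FLOAT,created_at:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

Pass the usual Dataflow options (runner, project, region) and the same pipeline runs as a managed streaming job.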

Keywords

DWH, BI, data, SQL, ELT, ETL, data modeling, batch, data streaming, data pipelines, Cloud Run, Cloud Functions, Google Cloud, BigQuery, Python, Looker, App Engine, Dataflow, Airflow, message broker, agile, scrum, kanban, continuous improvement, pair programming, unit testing, continuous integration.

You

You're experienced with SQL, DWH, and BI. Very comfortable with at least a couple of frameworks and languages (we use Python). Having experience with any cloud data ecosystem helps a lot (we use Google Cloud). You know your data stuff, so a little modelling challenge doesn't scare you, and you can survive on a Linux box.

You're passionate about technical excellence. You know data quality processes, how to interact with APIs and message brokers, and how to build ELT flows; you care about consistent coding style, code reviews, unit tests, documentation, refactoring, and good naming. You have opinions about classical and modern data models, both business and technical KPIs, and the major data engineering stacks and design patterns.

You're a builder and you want to impact the world around you. So you get stuff done. You problem-solve. You move fast.

You're optimistic, you have a healthy sense of self-worth, you speak your mind, and you have strong empathy for your peers. More of a team player than a lone wolf.

The perks

Competitive pay. Private health services. Trips to our Manhattan office. Crazy continuous learning. Playful mood. Foosball. Punching bag. Ping pong. Specialty coffee. Did we mention continuous learning? Hipster mansion near Cismigiu. Lots of couches. Whiteboards everywhere. We write on the walls too. We host meetups and go to conferences. Friday lunch-party, catered breakfast, craft beer-stocked fridges. Lounge chairs in the garden. Experienced engineers doing no-bullshit work. MacBooks and fancy dev tools.