The best of 120 newspapers and magazines in one place.
Every day, we gather the stories that suit you.
No fake news. No filter bubble.
Journalism is changing, and we are part of that change. Nobody really knows where it's going. Every new thing we try is an experiment.
We like to base our experiments on informed decisions, and to make those decisions we use data, lots of data. That data is collected by the clients and the back-end, but it arrives raw, and raw data is only good for filling disk space.
We have an awesome ETL pipeline, based on PySpark, which runs on Google Dataproc and writes to an Amazon Redshift database, so we can visualize and explore the data using Looker. The ETL pipeline combines data from more than 30 tables and buckets, including an event stream of at least 4 million events per day.
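To give a flavour of the kind of work involved: a transform step in such a pipeline turns raw event lines into aggregates you can actually query. Here is a minimal sketch in plain Python, with a completely hypothetical event schema (the `type`, `item_id` and `timestamp` fields are illustrative, not our actual format):

```python
import json
from collections import Counter
from datetime import datetime, timezone

def daily_reads_per_item(raw_events):
    """Count 'item_opened' events per (date, item_id).

    `raw_events` is an iterable of JSON lines; each line is assumed to
    be an object with at least 'type', 'item_id' and a Unix 'timestamp'
    (hypothetical schema for illustration).
    """
    counts = Counter()
    for line in raw_events:
        event = json.loads(line)
        if event.get("type") != "item_opened":
            continue  # only count read events, skip everything else
        day = datetime.fromtimestamp(event["timestamp"], tz=timezone.utc).date()
        counts[(day.isoformat(), event["item_id"])] += 1
    return counts
```

In the real pipeline this kind of aggregation happens distributed, in PySpark on Dataproc, with the results loaded into Redshift; the shape of the problem is the same.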
We'll get to the nitty-gritty description of your skills soon, but first we'll tell you a bit more about why we chose to join Blendle ourselves. These were the most important reasons for us, in no particular order:
Journalism is an essential part of every society, but newspapers and magazines see their revenues drop. Every day, fantastic articles are published – much better than the ones you’ll find on free websites – yet fewer people have subscriptions, and therefore more journalists are getting fired. At Blendle, we want to tap into a new revenue source for newspapers and magazines.
Readers love it because they can now easily access brilliant articles without having to get a subscription to every single publication. We find it awesome to build a product that's already used by 1,000,000 people, and which will hopefully be used by even more people soon.
Blendle employs nearly 60 people now, and the atmosphere is great. Below you’ll find some pictures of our office, and our weekend getaways:
We expect that you know best how to unlock your full potential, and that you don't need any rules for that. Roland, for example, kicks off his day on an idyllic balcony in Rotterdam with his laptop on his lap, and Jean likes to work at night, talking to himself in our chat program while he's programming.
On top of a monthly donation to a bank account of your choice, we take good care of you in the office. Every day, lunch is prepared with love, and we're happy to fix dinner or a massage when needed. Don't live in Utrecht? Then we'll cover the expenses for whatever gets you here: either public transport or your dear fuel-sipping car.
We're looking for a data engineer with experience in dimensional modelling and big data. Neither is a hard requirement, but you should at least have solid experience modelling business processes, and you should really love diving into new technologies. Above all, we're looking for a developer who takes pride in their work.
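If "dimensional modelling" is new to you: it means organising warehouse data into fact tables (measurements) and dimension tables (context), joined via surrogate keys. A toy sketch, with illustrative names that are not our actual schema:

```python
def get_surrogate_key(dimension, natural_key, attributes):
    """Return the surrogate key for a natural key, inserting a new
    dimension row when the key is unseen (a simple 'type 1' dimension).

    `dimension` maps natural_key -> (surrogate_key, attributes dict).
    """
    if natural_key not in dimension:
        dimension[natural_key] = (len(dimension) + 1, attributes)
    return dimension[natural_key][0]

# Building a fact row: the natural publisher id is replaced by a
# surrogate key pointing into the publisher dimension.
publishers = {}
fact_row = {
    "publisher_key": get_surrogate_key(publishers, "nrc", {"name": "NRC"}),
    "reads": 42,
}
```

In a real warehouse the dimension lives in a database table rather than a dict, but the lookup-or-insert pattern is the same.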
The things we currently work with include:
• Google Dataproc
• Amazon Redshift
• Google Analytics
Your main responsibilities will be:
• develop and maintain the ETL process
• troubleshoot problems in production
• collaborate with other teams to identify information that should be available in the data warehouse
• occasionally provide our controller with financial reports
Still here? Great.
Résumés explaining which high school you went to, or which supermarket's floors you mopped, aren't really our thing. Hit the button and we'll contact you!