We develop a wide range of products, including:
- AutoExtract: an API for automated e-commerce and article extraction from web pages using Machine Learning
- Crawlera: a smart crawling proxy
- Scrapy Cloud: a cloud platform for running spiders
- Data on Demand: turnkey web scraping services, and more!
Come join our fully remote team of over 180 people in 30 countries.
You'll have the chance to work on projects that build and transfer datasets of billions of records, and to build the systems that deliver data to Fortune 500 companies and to startups building great products on top of our stack.
Scrapinghub has benefited from Open Source throughout our history. As a way to give back to the community, everybody on our team has the chance to contribute to Open Source projects. Find out more about Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
Here are some of our open positions (check out our website for a full list: https://scrapinghub.com/jobs):
- Senior Software Engineer (Big Data/AI): design and implement distributed systems: a large-scale web crawling platform, Deep Learning based web data extraction components, queueing algorithms, and large datasets.
- DevOps Engineer: work closely with our Crawlera developers to make their lives easier by building automation and handling everything around running, deploying, and upgrading the application.
- Python Developer: join our Delivery team to work on web crawler development with Scrapy, our flagship open source project.
You can apply here: https://scrapinghub.com/jobs
If you have any further questions, please feel free to reach out to me directly at jessica@scrapinghub.com.