AWS Hadoop Spark Mongo DB
$30-250 CAD
Paid on delivery
I have set up an AWS cluster with Hadoop, Spark, and MongoDB nodes.
I just need help ingesting raw data into Hadoop; Spark should then read and process this data and store the results in MongoDB.
Set up an ETL pipeline:
Download and unzip the pageview dataset from a website
Ingest the raw data into Hadoop
Use Spark to read data from the Hadoop cluster and compute the per-language statistics
Store the per-language statistics into MongoDB directly from Spark
Write scripts to automate the pipeline once a month
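The per-language aggregation in step 3 can be sketched in plain Python, assuming the Wikimedia pageviews dump format where each line is `domain_code page_title count_views total_bytes` and the language is the prefix of the domain code (e.g. `en`, `en.m`, `fr.wikisource`). This is only an illustration of the aggregation logic; in Spark the same computation would typically be a `groupBy`/`sum` over a DataFrame, with the result written out through the MongoDB Spark connector.

```python
from collections import defaultdict

def per_language_stats(lines):
    """Aggregate view counts per language from pageview lines.

    Assumes the Wikimedia pageviews format:
        "<domain_code> <page_title> <count_views> <total_bytes>"
    The language is the part of domain_code before the first dot.
    """
    totals = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed lines
        domain, _title, views, _bytes = parts
        lang = domain.split(".")[0]
        totals[lang] += int(views)
    return dict(totals)

# Small sample in the assumed format:
sample = [
    "en Main_Page 10 0",
    "en.m Main_Page 5 0",
    "fr Accueil 7 0",
]
print(per_language_stats(sample))  # {'en': 15, 'fr': 7}
```

In a PySpark job the equivalent would be roughly `df.groupBy("lang").agg(sum("views"))` followed by `df.write.format("mongodb")...save()`, and the monthly run in step 5 would usually be a cron entry invoking `spark-submit`.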
Project ID: #15500314
About the project
13 freelancers are bidding an average of $399 for this job
I have previously worked with big data technologies like Spark and Hadoop, and as a data scientist I have also worked with NoSQL databases like Cassandra and MongoDB. I can help you with your project. Relevant Skills More
I am proficient with Advanced Excel, R, and Python. I have good knowledge of deep learning algorithms, and have also developed dashboards and Shiny web applications in R. Relevant Skills and Experience More
Given my expertise in this field, I can assure you that I can easily do this task. My only aim is to provide you with the fastest and best solution for your job. Relevant Skills and Experience Amazon Web Serv More
Hi, I have experience in Hadoop/Spark/Java. For more info, ping me. Relevant Skills and Experience Java/Hadoop/Spark Proposed Milestones $222 CAD - final
This work is very similar to a project I did in a past job at the startup Clickly, where I was the lead Data Scientist, so I have done it before and believe I am the best person for it. Stay tuned, I'm still w More
Hi, we will do this job on a weekly basis for $1200 USD; this covers the Hadoop work plus Informatica or Tableau. Relevant Skills and Experience Hadoop, Informatica, Tableau Proposed Milestones $1266 CAD - Hadoop, Informatica