
Need someone with Spark Scala, Hadoop and Hive Knowledge

₹12500-37500 INR

Closed
Posted over 1 year ago

Paid on delivery
Anyone with knowledge of the following technologies can bid:
- Programming languages: Scala (required), plus Python
- Hands-on experience with Spark
- Hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix
- Cloud experience on Cloudera or AWS
- Oozie workflows
- Experience creating CI/CD pipelines
- Unit/JUnit testing, integration or end-to-end testing
- Kafka
Project ID: 34535500

Project information

7 proposals
Remote project
Active 2 years ago

7 freelancers are bidding an average of ₹22,786 INR for this job
Hello. I am an expert in Big Data analytics with 3 years of experience in the field. I have worked as a data analyst on a big data project team, handling data sources (structured and unstructured), data cleaning, data mining, and ETL using Apache Spark (Python, Scala). I have extensive experience in data warehousing with Hive, HBase, and Apache Kafka, and in data visualization with Tableau, Pentaho, and Grafana, so I believe I am a strong fit for your project. If you would like to hire me, please contact me on chat. Thanks.
₹25,000 INR in 7 days
0.0 (0 reviews)
I have experience in software testing, including web and mobile testing, and have handled 4 testing projects. I also have experience in automation testing for web and mobile using Appium.
₹25,000 INR in 7 days
0.0 (0 reviews)
Hi, I have 8 years of experience in the IT industry, including 3+ years in big data technologies such as Spark, Scala, Hadoop, Kafka, Unix, Hive, SQL, and JUnit. I would be a good fit for your requirement. Thanks, Kopal
₹25,000 INR in 15 days
0.0 (0 reviews)
Hello, I can help you with this project. I am a computer engineer and Big Data expert. Throughout my professional career I have built expertise in implementing data storage, transformation, and visualization solutions, notably real-time data flows and service APIs supporting business needs. In detail, my experience covers:
- Architecture design based on Cloudera/Hortonworks distributions and Azure Cloud
- Distributed data storage and processing: HDFS, Hive, Impala, and Kudu
- NoSQL databases: MongoDB, Couchbase, Cassandra
- Data ingestion in Big Data environments: Apache NiFi, Flume, Kafka
- In-memory data processing, both batch and streaming: Apache Spark
- User tools: Hue, Oozie, and Sqoop
- Indexing and search: ELK, SOLR
- Machine learning methodology for predictive models
- Programming languages: Java, Kotlin, Scala, Python, R, C++, C, VHDL, JavaScript, HTML, AngularJS
- Data processing libraries: NumPy, Pandas, scikit-learn
- Neural network tools: TensorFlow and Keras
- Anaconda and Jupyter Notebook
- Azure, Azure Data Factory, Databricks
₹25,000 INR in 7 days
0.0 (0 reviews)
Currently working as a Hadoop administrator on the Cloudera distribution for 3 clusters, ranging from POC to PROD. Involved in designing and deploying a multitude of applications using almost all of the AWS stack. My responsibilities and experience include:
- Cluster maintenance: commissioning and decommissioning data nodes, cluster monitoring, troubleshooting, managing and reviewing data backups and Hadoop log files
- Architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures
- Cluster availability and on-call support
- Setting up projects and volumes for new Hadoop projects
- Snapshots and HDFS data backups, including remote backups, to protect cluster data
- Importing and exporting data from databases such as MySQL, Oracle, and other RDBMSs
- Good knowledge of Kerberos security
- Installing, configuring, and monitoring Linux
- Installing, upgrading, and managing Hadoop YARN clusters; configuring Hadoop High Availability
- MapReduce performance tuning
- Managing and reviewing Hadoop and HBase log files
- Provisioning, installing, configuring, monitoring, and maintaining HDFS, YARN, HBase, Flume, Sqoop, Oozie, Pig, and Hive
- Recovering from node failures and troubleshooting common Hadoop cluster issues
- Scripting Hadoop package installation and configuration
₹22,000 INR in 7 days
0.0 (0 reviews)

About this client

UJJAIN, India
5.0
40
Payment method verified
Member since Oct 27, 2018

Client verification

Freelancer ® is a registered Trademark of Freelancer Technology Pty Limited (ACN 142 189 759)
Copyright © 2024 Freelancer Technology Pty Limited (ACN 142 189 759)