Hadoop and CPU

Hello,

This probably doesn't have a simple answer, but in exploring whether our existing hardware infrastructure could also support a Hadoop cluster, I'm interested in how well KAP (Kognitio Analytical Platform) can share system resources.

I believe disks could be shared by simple allocation during system installation: Kognitio could be apportioned a small fraction if HDFS is used to permanently house the data (see the hdfs-site.xml sketch below).

I also understand that limits can be set in the Kognitio configuration to cap the RAM made available to KAP; presumably something similar can be done on the Hadoop side to grow and shrink RAM availability based on requirements (e.g. how intensive the MapReduce jobs are; see the yarn-site.xml sketches below). I understand KAP reserves roughly 90% of the RAM on the database nodes, leaving roughly 10% free by default.

What about CPU? Will KAP recover gracefully when a poorly written or intensive Hadoop job hogs the available CPU on one or many nodes? Will it behave like a standard SQL query that hogs all the CPU of one node and recover gracefully once said Hadoop job is killed or finishes its processing? I'm concerned about the support team's ability to monitor performance issues in KAP if analysts were allowed to run intensive Hadoop jobs on the same cluster.
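To illustrate the kind of disk apportioning I mean: as far as I know, dfs.datanode.du.reserved is the standard HDFS property for keeping a slice of each DataNode volume free for non-HDFS use (Kognitio's local storage, in this case). Just a sketch; the 2 TB figure is a made-up example, not a recommendation:

[code]
<!-- hdfs-site.xml (sketch): reserve space on each DataNode volume
     for non-HDFS use, e.g. Kognitio's local files.
     The value is bytes per volume; 2 TB here is illustrative only. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>2000000000000</value>
</property>
[/code]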
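On the RAM side, I assume the Hadoop-level equivalent of Kognitio's memory cap would be YARN's per-node container memory limit, which could be raised or lowered as the MapReduce workload demands. Again a sketch with made-up numbers:

[code]
<!-- yarn-site.xml (sketch): cap the total RAM YARN containers may
     use on each node, leaving the remainder for KAP.
     16384 MB is an illustrative value. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>
<!-- keep any single container bounded as well -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
[/code]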
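For the CPU question, I gather (and would welcome correction) that YARN can fence its CPU usage with cgroups via the LinuxContainerExecutor, which should at least stop a runaway job from taking 100% of a node's CPU away from KAP. Something along these lines:

[code]
<!-- yarn-site.xml (sketch): confine YARN containers to a share of
     each node's CPU using cgroups. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<!-- allow YARN at most 60% of physical CPU (illustrative value) -->
<property>
  <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
  <value>60</value>
</property>
<!-- enforce the cap strictly, not only under contention -->
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
  <value>true</value>
</property>
[/code]

Even with something like that in place, I'd still like to know how KAP itself behaves during and after such contention.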

Please help.

I couldn't find the right solution on the internet.

References:
kognitio.com/.../viewtopic.php

Thank you