Spark Cassandra connector connection error, no more host to try
I have an issue related to the DataStax spark-cassandra-connector. When trying to test our Spark-to-Cassandra connection, I use the code below. The problem is that the code throws an exception after some time (about half an hour). I think there is a connection issue somewhere; can anyone help? I am stuck.
SparkConf conf = new SparkConf(true)
        .setMaster("local")
        .set("spark.cassandra.connection.host", Config.CASSANDRA_CONTACT_POINT)
        .setAppName(Config.CASSANDRA_DB_NAME)
        .set("spark.executor.memory", Config.SPARK_EXECUTOR_MEMORY);

SparkContext javaSparkContext = new SparkContext(conf);
SparkContextJavaFunctions functions = CassandraJavaUtil.javaFunctions(javaSparkContext);

for (;;) {
    JavaRDD<ObjectHandler> obj = functions.cassandraTable(
            Config.CASSANDRA_DB_NAME, "my_users", ObjectHandler.class);
    System.out.println("#####" + obj.count() + "#####");
}
Error:
java.lang.OutOfMemoryError: Java heap space
    at org.jboss.netty.buffer.HeapChannelBuffer.slice(HeapChannelBuffer.java:201)
    at org.jboss.netty.buffer.AbstractChannelBuffer.readSlice(AbstractChannelBuffer.java:323)
    at com.datastax.driver.core.CBUtil.readValue(CBUtil.java:247)
    at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:395)
    at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:383)
    at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:201)
    at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:198)
    at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
19:11:12.311 DEBUG [New I/O worker #1612][com.datastax.driver.core.Connection] Defuncting connection /192.168.1.26:9042
com.datastax.driver.core.TransportException: [/192.168.1.26:9042] Unexpected exception triggered (java.lang.OutOfMemoryError: Java heap space)
    at com.datastax.driver.core.Connection$Dispatcher.exceptionCaught(Connection.java:614)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:60)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
    at org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:48)
    at org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:566)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.jboss.netty.buffer.HeapChannelBuffer.slice(HeapChannelBuffer.java:201)
    at org.jboss.netty.buffer.AbstractChannelBuffer.readSlice(AbstractChannelBuffer.java:323)
    at com.datastax.driver.core.CBUtil.readValue(CBUtil.java:247)
    at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:395)
    at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:383)
    at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:201)
    at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:198)
    at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    ... 3 more
19:11:13.549 DEBUG [New I/O worker #1612][com.datastax.driver.core.Connection] [/192.168.1.26:9042-1] closing connection
19:11:12.311 DEBUG [main][com.datastax.driver.core.ControlConnection] [Control connection] error on /192.168.1.26:9042 connection, no more host to try
com.datastax.driver.core.ConnectionException: [/192.168.1.26:9042] Operation timed out
    at com.datastax.driver.core.DefaultResultSetFuture.onTimeout(DefaultResultSetFuture.java:138)
    at com.datastax.driver.core.Connection$ResponseHandler$1.run(Connection.java:763)
    at org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:546)
    at org.jboss.netty.util.HashedWheelTimer$Worker.notifyExpiredTimeouts(HashedWheelTimer.java:446)
    at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:395)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at java.lang.Thread.run(Thread.java:722)
19:11:13.551 DEBUG [main][com.datastax.driver.core.Cluster] Shutting down
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.1.26:9042 (com.datastax.driver.core.ConnectionException: [/192.168.1.26:9042] Operation timed out))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:195)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1143)
    at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:313)
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:166)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$4.apply(CassandraConnector.scala:151)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$4.apply(CassandraConnector.scala:151)
    at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
    at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
    at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:72)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:97)
    at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:108)
    at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:131)
    at com.datastax.spark.connector.rdd.CassandraRDD.tableDef$lzycompute(CassandraRDD.scala:206)
    at com.datastax.spark.connector.rdd.CassandraRDD.tableDef(CassandraRDD.scala:205)
    at com.datastax.spark.connector.rdd.CassandraRDD.<init>(CassandraRDD.scala:212)
    at com.datastax.spark.connector.SparkContextFunctions.cassandraTable(SparkContextFunctions.scala:48)
    at com.datastax.spark.connector.SparkContextJavaFunctions.cassandraTable(SparkContextJavaFunctions.java:47)
    at com.datastax.spark.connector.SparkContextJavaFunctions.cassandraTable(SparkContextJavaFunctions.java:89)
    at com.datastax.spark.connector.SparkContextJavaFunctions.cassandraTable(SparkContextJavaFunctions.java:140)
    at com.shephertz.app42.paas.spark.SegmentationWorker.main(SegmentationWorker.java:52)
It looks like you ran out of heap space:

    java.lang.OutOfMemoryError: Java heap space
The java-driver (which the spark-connector uses to interact with Cassandra) defuncted the connection because an OutOfMemoryError was thrown while processing a request. When a connection is defuncted, its host is brought down.
The NoHostAvailableException is then raised because all of your hosts were brought down, since their connections were defuncted as a result of the OutOfMemoryError.
Do you know why you may be getting an OutOfMemoryError? What is your heap size? Are you doing anything that could create a lot of objects on the JVM heap? Could you possibly have a memory leak?
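As a first diagnostic step, it helps to confirm what heap the JVM is actually running with. Note that with `setMaster("local")` the executor runs inside the driver JVM, so `spark.executor.memory` may not be the limit you hit; the driver JVM's own `-Xmx` is. This is a minimal plain-Java sketch (independent of Spark) for checking the effective heap:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory(): the ceiling the heap can grow to (the -Xmx value)
        System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");
        // totalMemory(): what the heap has currently grown to
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
        // freeMemory(): the unused portion of totalMemory()
        System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");
    }
}
```

If the max heap turns out to be small, raising it (for example with `-Xmx2g` on the `java` command line that launches your driver) is the knob that matters in local mode. Fetching an entire Cassandra table and counting it in a tight loop will also churn through a lot of heap, so watching these numbers across iterations can reveal whether memory is steadily leaking or just spiking.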