How to recover an HDFS JournalNode?
I have configured 3 JournalNodes, call them jn1, jn2, and jn3. Each of them saves its edit log under /tmp/hadoop/journalnode/mycluster...
On top of that I started a NameNode, a Secondary NameNode, and a bunch of DataNodes. The system ran fine until one day jn2 and jn3 died, and furthermore their disks were corrupted.
I then purchased new disks and restarted jn2 and jn3. The bad news is that they didn't work anymore.
They keep complaining:
org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException: Journal Storage Directory /tmp/hadoop/dfs/journalnode/mycluster not formatted
    at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:457)
    at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:640)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:185)
    at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:224)
    at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25431)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
Is there any way to recover jn2 and jn3 from the still-living jn1?
I would really appreciate any possible solutions!
Thanks, Miles
I was able to fix the issue by creating the missing directory on the journal host, where the NameNode writes its edit files.
Make sure the VERSION file gets created there, otherwise you will keep getting org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException.
Alternatively, copy the VERSION file from a healthy JournalNode into that directory.
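The steps above can be sketched as a few shell commands. This is a minimal sketch, assuming jn1 is the surviving JournalNode, the journal directory from the question, and that you run it as the HDFS service user on the failed node; the hostname and paths are illustrative and should be adjusted to your cluster.

```shell
# Journal storage directory from the error message (adjust to dfs.journalnode.edits.dir).
JOURNAL_DIR=/tmp/hadoop/dfs/journalnode/mycluster

# On the failed JournalNode (jn2 or jn3): recreate the storage directory
# the JournalNode expects to find.
mkdir -p "$JOURNAL_DIR/current"

# Copy the VERSION file from the healthy JournalNode (jn1) so the
# directory is considered formatted.
scp jn1:"$JOURNAL_DIR/current/VERSION" "$JOURNAL_DIR/current/VERSION"

# Restart the JournalNode (Hadoop 3.x syntax; on 2.x use
# "hadoop-daemon.sh start journalnode" instead).
hdfs --daemon start journalnode
```

Once the JournalNode reports its storage as formatted, the active NameNode will re-send the edit segments the node is missing, so only the VERSION file needs to be restored by hand.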