My problem was solved once I used the required Bowtie version, 0.12.8. Please try that first.
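A quick way to confirm which Bowtie a node will actually pick up (assuming bowtie is on the PATH, which is how Crossbow normally locates it; see the Crossbow manual for the option that points it at a specific binary):

# Show which Bowtie binary is first on the PATH and its version
which bowtie
bowtie --version | head -n 1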
-
Hello,
I was wondering whether you were able to resolve this; I am running into the same problem.
-
Is this a Hadoop configuration error? Standard Hadoop tests such as TeraSort and WordCount work fine. Any help is greatly appreciated. Thanks.
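Since TeraSort and WordCount go through plain MapReduce while Crossbow runs through Hadoop Streaming, one way to narrow this down is a trivial identity streaming job with /bin/cat as mapper and reducer; if that also fails, the problem is in the streaming layer rather than in Crossbow. (The streaming jar path below is a typical Hadoop 1.x location and is an assumption for this cluster.)

# Stage a small input file and run an identity streaming job
hadoop fs -put /etc/hosts /tmp/streaming-smoke-in
hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming-*.jar \
  -input /tmp/streaming-smoke-in \
  -output /tmp/streaming-smoke-out \
  -mapper /bin/cat \
  -reducer /bin/cat
# Inspect the result
hadoop fs -cat /tmp/streaming-smoke-out/part-00000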
-
A new error appeared this morning while trying to re-run the example: "Broken pipe".
org.apache.hadoop.streaming.PipeMapRed: java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:282)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at java.io.DataOutputStream.flush(DataOutputStream.java:106)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:568)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper_aroundBody2(MapTask.java:444)
at org.apache.hadoop.mapred.MapTask$AjcClosure3.run(MapTask.java:1)
at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
at com.intel.bigdata.management.agent.HadoopTaskAspect.doPhaseCall(HadoopTaskAspect.java:166)
at com.intel.bigdata.management.agent.HadoopTaskAspect.ajc$inlineAccessMethod$com_intel_bigdata_management_agent_HadoopTaskAspect$com_intel_bigdata_management_agent_HadoopTaskAspect$doPhaseCall(HadoopTaskAspect.java:1)
at com.intel.bigdata.management.agent.HadoopTaskAspect.aroundMap(HadoopTaskAspect.java:38)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run_aroundBody0(MapTask.java:377)
at org.apache.hadoop.mapred.MapTask$AjcClosure1.run(MapTask.java:1)
at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
at com.intel.bigdata.management.agent.HadoopTaskAspect.aroundTaskRun(HadoopTaskAspect.java:95)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:351)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
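(Diagnostic note: a "Broken pipe" from PipeMapRed usually just means the streaming child process exited before consuming all of its input, so the underlying error is normally in that attempt's stderr rather than in this trace. On a Hadoop 1.x tasktracker those logs typically sit under the userlogs directory; the base path below is an assumption, and <job_id>/<attempt_id> are placeholders taken from the JobTracker web UI.)

# Run on the tasktracker node that executed the failed attempt
ls /var/log/hadoop/userlogs/<job_id>/
cat /var/log/hadoop/userlogs/<job_id>/<attempt_id>/stderr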
-
Crossbow 1.2.1 On Hadoop - help needed
We have installed Crossbow 1.2.1 on our three-node Hadoop cluster [running the Intel Hadoop distribution]. After installation, I ran the test script ./cb_hadoop --test, which validated that all components (Bowtie, SOAPsnp and the SRA toolkit) were installed correctly and usable by Crossbow.
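(One caveat worth noting: as far as I understand, cb_hadoop --test only exercises the node it is run on, while the streaming map tasks need Bowtie, SOAPsnp and the SRA toolkit at the same paths on every worker node. A minimal per-node check, with placeholder host names:)

# Replace worker1 worker2 worker3 with the actual worker host names
for node in worker1 worker2 worker3; do
  ssh "$node" 'which bowtie soapsnp fastq-dump && bowtie --version | head -n 1'
done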
After that, we tried to run the E. coli example following the instructions here - http://bowtie-bio.sourceforge.net/cr...l.shtml#hadoop. This job involves connecting to an NIH site to download a .sra file (3 GB) and then using it for processing in the later steps. The file is downloaded fine and written into HDFS before the Align phase starts.
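(To rule out the download/conversion step independently of Hadoop, the SRA toolkit can also be exercised by hand on the downloaded file; the file name below is a placeholder, not the accession from the example.)

# Convert only the first 1,000 spots as a quick sanity check
fastq-dump -X 1000 --split-files SRRXXXXXX.sra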
The job runs for a while and then fails; here is the terminal output:
Crossbow job
------------
Hadoop streaming commands in: /tmp/crossbow-8syPJjsEPD/invoke.scripts/cb.70647.hadoop.sh
Running...
==========================
Stage 1 of 4. Preprocess
==========================
Wed Jun 26 15:27:11 PDT 2013
Warning: $HADOOP_HOME is deprecated.
packageJobJar: [/usr/local/bin/crossbow-1.2.1/Get.pm, /usr/local/bin/crossbow-1.2.1/Counters.pm, /usr/local/bin/crossbow-1.2.1/Util.pm, /usr/local/bin/crossbow-1.2.1/Tools.pm, /usr/local/bin/crossbow-1.2.1/AWS.pm, /tmp/hadoop-root/hadoop-unjar1303404851548420422/] [] /tmp/streamjob4479954697968619538.jar tmpDir=null
13/06/26 15:27:19 INFO mapred.FileInputFormat: Total input paths to process : 1
13/06/26 15:27:19 INFO streaming.StreamJob: getLocalDirs(): [/hdfs/hd1/hadoop/mapred, /hdfs/hd2/hadoop/mapred, /hdfs/hd3/hadoop/mapred, /hdfs/hd4/hadoop/mapred, /hdfs/hd5/hadoop/mapred, /hdfs/hd6/hadoop/mapred, /hdfs/hd7/hadoop/mapred, /hdfs/hd8/hadoop/mapred, /hdfs/hd9/hadoop/mapred, /hdfs/hd10/hadoop/mapred, /hdfs/hd11/hadoop/mapred, /hdfs/hd12/hadoop/mapred]
13/06/26 15:27:19 INFO streaming.StreamJob: Running job: job_201305151132_0043
13/06/26 15:27:19 INFO streaming.StreamJob: To kill this job, run:
13/06/26 15:27:19 INFO streaming.StreamJob: /usr/lib/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=idh-hssdn1:54311 -kill job_201305151132_0043
13/06/26 15:27:19 INFO streaming.StreamJob: Tracking URL: http://idh-hssdn1:50030/jobdetails.j...305151132_0043
13/06/26 15:27:20 INFO streaming.StreamJob: map 0% reduce 0%
13/06/26 15:27:28 INFO streaming.StreamJob: map 33% reduce 0%
13/06/26 15:27:35 INFO streaming.StreamJob: map 67% reduce 0%
13/06/26 15:27:48 INFO streaming.StreamJob: map 100% reduce 0%
13/06/26 16:21:58 INFO streaming.StreamJob: map 100% reduce 100%
13/06/26 16:21:58 INFO streaming.StreamJob: Job complete: job_201305151132_0043
13/06/26 16:21:58 INFO streaming.StreamJob: Output: hdfs:///crossbow/intermediate/70647/preproc
==========================
Stage 2 of 4. Align
==========================
Wed Jun 26 16:22:00 PDT 2013
Warning: $HADOOP_HOME is deprecated.
packageJobJar: [/usr/local/bin/crossbow-1.2.1/Get.pm, /usr/local/bin/crossbow-1.2.1/Counters.pm, /usr/local/bin/crossbow-1.2.1/Util.pm, /usr/local/bin/crossbow-1.2.1/Tools.pm, /usr/local/bin/crossbow-1.2.1/AWS.pm, /tmp/hadoop-root/hadoop-unjar7668567870786726388/] [] /tmp/streamjob3181086113362656316.jar tmpDir=null
13/06/26 16:22:08 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/06/26 16:22:08 WARN snappy.LoadSnappy: Snappy native library is available
13/06/26 16:22:08 INFO snappy.LoadSnappy: Snappy native library loaded
13/06/26 16:22:08 INFO mapred.FileInputFormat: Total input paths to process : 21
13/06/26 16:22:08 INFO streaming.StreamJob: getLocalDirs(): [/hdfs/hd1/hadoop/mapred, /hdfs/hd2/hadoop/mapred, /hdfs/hd3/hadoop/mapred, /hdfs/hd4/hadoop/mapred, /hdfs/hd5/hadoop/mapred, /hdfs/hd6/hadoop/mapred, /hdfs/hd7/hadoop/mapred, /hdfs/hd8/hadoop/mapred, /hdfs/hd9/hadoop/mapred, /hdfs/hd10/hadoop/mapred, /hdfs/hd11/hadoop/mapred, /hdfs/hd12/hadoop/mapred]
13/06/26 16:22:08 INFO streaming.StreamJob: Running job: job_201305151132_0044
13/06/26 16:22:08 INFO streaming.StreamJob: To kill this job, run:
13/06/26 16:22:08 INFO streaming.StreamJob: /usr/lib/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=idh-hssdn1:54311 -kill job_201305151132_0044
13/06/26 16:22:08 INFO streaming.StreamJob: Tracking URL: http://idh-hssdn1:50030/jobdetails.j...305151132_0044
13/06/26 16:22:09 INFO streaming.StreamJob: map 0% reduce 0%
13/06/26 16:22:33 INFO streaming.StreamJob: map 4% reduce 0%
13/06/26 16:22:34 INFO streaming.StreamJob: map 5% reduce 0%
13/06/26 16:22:36 INFO streaming.StreamJob: map 9% reduce 0%
13/06/26 16:22:37 INFO streaming.StreamJob: map 10% reduce 0%
13/06/26 16:22:39 INFO streaming.StreamJob: map 14% reduce 0%
13/06/26 16:22:40 INFO streaming.StreamJob: map 15% reduce 0%
13/06/26 16:22:43 INFO streaming.StreamJob: map 20% reduce 0%
13/06/26 16:22:46 INFO streaming.StreamJob: map 23% reduce 0%
13/06/26 16:22:47 INFO streaming.StreamJob: map 25% reduce 0%
13/06/26 16:22:49 INFO streaming.StreamJob: map 28% reduce 0%
13/06/26 16:22:50 INFO streaming.StreamJob: map 30% reduce 0%
13/06/26 16:22:52 INFO streaming.StreamJob: map 32% reduce 0%
13/06/26 16:22:53 INFO streaming.StreamJob: map 33% reduce 0%
13/06/26 16:22:58 INFO streaming.StreamJob: map 29% reduce 0%
13/06/26 16:22:59 INFO streaming.StreamJob: map 19% reduce 0%
13/06/26 16:23:00 INFO streaming.StreamJob: map 10% reduce 0%
13/06/26 16:23:01 INFO streaming.StreamJob: map 5% reduce 0%
13/06/26 16:23:02 INFO streaming.StreamJob: map 0% reduce 0%
13/06/26 16:23:14 INFO streaming.StreamJob: map 100% reduce 100%
13/06/26 16:23:14 INFO streaming.StreamJob: To kill this job, run:
13/06/26 16:23:14 INFO streaming.StreamJob: /usr/lib/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=idh-hssdn1:54311 -kill job_201305151132_0044
13/06/26 16:23:14 INFO streaming.StreamJob: Tracking URL: http://idh-hssdn1:50030/jobdetails.j...305151132_0044
13/06/26 16:23:14 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201305151132_0044_m_000013
13/06/26 16:23:14 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Non-zero exitlevel from Align streaming job
elastic-mapreduce script completed with exitlevel 256
This is what I see in the logs:
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /crossbow/intermediate/70647/align/_temporary/_attempt_201305151132_0044_m_000006_2/part-00006 File does not exist. Holder DFSClient_attempt_201305151132_0044_m_000006_2 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1802)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1793)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1848)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1836)
at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:719)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1414)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1410)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1408)
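(For what it is worth: a LeaseExpiredException on a _temporary attempt file is often a secondary symptom of the map attempt being killed while a duplicate, speculatively executed attempt races for the same output. Whether that is the trigger here is an assumption; a commonly suggested mitigation is to disable speculative execution in the generated streaming commands, for example by adding the -D flags below, before the other streaming options, to the invocations in the script noted above at /tmp/crossbow-8syPJjsEPD/invoke.scripts/cb.70647.hadoop.sh. The streaming jar path is again an assumption.)

hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming-*.jar \
  -D mapred.map.tasks.speculative.execution=false \
  -D mapred.reduce.tasks.speculative.execution=false \
  ...   # remaining Crossbow streaming arguments unchanged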
Any ideas on how to resolve this problem? If needed, I can post the contents of other logs.
Thanks,
Abhi