  • Crossbow 1.2.1 On Hadoop - help needed

    We have installed Crossbow 1.2.1 on our 3-node Hadoop cluster [running the Intel Hadoop Distribution]. After installation, I ran the sample test script ./cb_hadoop --test, which validated that all components (Bowtie, SOAPsnp and the SRA toolkit) were installed correctly and are usable by Crossbow.
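
    As an additional sanity check (a sketch, not Crossbow's own procedure), it can help to confirm on every worker node which tool binaries the streaming tasks will actually pick up, since each map task invokes them locally; the binary names and the assumption that they sit on PATH may not match your install:

    # run on each node in the cluster
    which bowtie                                   # aligner Crossbow drives during the Align stage
    bowtie --version                               # prints the installed Bowtie version
    which soapsnp || echo "soapsnp not on PATH (binary name may differ in your install)"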

    After that, we tried to run the e_coli sample following the instructions here - http://bowtie-bio.sourceforge.net/cr...l.shtml#hadoop. This job involves connecting to an NIH site to download a .SRA file (3 GB) and then using it for processing in the subsequent steps. The file is downloaded fine and written into HDFS before the Alignment phase starts.
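
    For completeness, the preprocessed reads can also be confirmed in HDFS before the Align stage runs (a hedged check; the intermediate path is the one reported by the Preprocess stage in the output below, and the 70647 job component will differ per run):

    hadoop fs -ls hdfs:///crossbow/intermediate/70647/preproc
    hadoop fs -du hdfs:///crossbow/intermediate/70647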

    The job runs for a while and then fails; here is the terminal output:
    Crossbow job
    ------------
    Hadoop streaming commands in: /tmp/crossbow-8syPJjsEPD/invoke.scripts/cb.70647.hadoop.sh
    Running...
    ==========================
    Stage 1 of 4. Preprocess
    ==========================
    Wed Jun 26 15:27:11 PDT 2013
    Warning: $HADOOP_HOME is deprecated.

    packageJobJar: [/usr/local/bin/crossbow-1.2.1/Get.pm, /usr/local/bin/crossbow-1.2.1/Counters.pm, /usr/local/bin/crossbow-1.2.1/Util.pm, /usr/local/bin/crossbow-1.2.1/Tools.pm, /usr/local/bin/crossbow-1.2.1/AWS.pm, /tmp/hadoop-root/hadoop-unjar1303404851548420422/] [] /tmp/streamjob4479954697968619538.jar tmpDir=null
    13/06/26 15:27:19 INFO mapred.FileInputFormat: Total input paths to process : 1
    13/06/26 15:27:19 INFO streaming.StreamJob: getLocalDirs(): [/hdfs/hd1/hadoop/mapred, /hdfs/hd2/hadoop/mapred, /hdfs/hd3/hadoop/mapred, /hdfs/hd4/hadoop/mapred, /hdfs/hd5/hadoop/mapred, /hdfs/hd6/hadoop/mapred, /hdfs/hd7/hadoop/mapred, /hdfs/hd8/hadoop/mapred, /hdfs/hd9/hadoop/mapred, /hdfs/hd10/hadoop/mapred, /hdfs/hd11/hadoop/mapred, /hdfs/hd12/hadoop/mapred]
    13/06/26 15:27:19 INFO streaming.StreamJob: Running job: job_201305151132_0043
    13/06/26 15:27:19 INFO streaming.StreamJob: To kill this job, run:
    13/06/26 15:27:19 INFO streaming.StreamJob: /usr/lib/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=idh-hssdn1:54311 -kill job_201305151132_0043
    13/06/26 15:27:19 INFO streaming.StreamJob: Tracking URL: http://idh-hssdn1:50030/jobdetails.j...305151132_0043
    13/06/26 15:27:20 INFO streaming.StreamJob: map 0% reduce 0%
    13/06/26 15:27:28 INFO streaming.StreamJob: map 33% reduce 0%
    13/06/26 15:27:35 INFO streaming.StreamJob: map 67% reduce 0%
    13/06/26 15:27:48 INFO streaming.StreamJob: map 100% reduce 0%
    13/06/26 16:21:58 INFO streaming.StreamJob: map 100% reduce 100%
    13/06/26 16:21:58 INFO streaming.StreamJob: Job complete: job_201305151132_0043
    13/06/26 16:21:58 INFO streaming.StreamJob: Output: hdfs:///crossbow/intermediate/70647/preproc
    ==========================
    Stage 2 of 4. Align
    ==========================
    Wed Jun 26 16:22:00 PDT 2013
    Warning: $HADOOP_HOME is deprecated.

    packageJobJar: [/usr/local/bin/crossbow-1.2.1/Get.pm, /usr/local/bin/crossbow-1.2.1/Counters.pm, /usr/local/bin/crossbow-1.2.1/Util.pm, /usr/local/bin/crossbow-1.2.1/Tools.pm, /usr/local/bin/crossbow-1.2.1/AWS.pm, /tmp/hadoop-root/hadoop-unjar7668567870786726388/] [] /tmp/streamjob3181086113362656316.jar tmpDir=null
    13/06/26 16:22:08 INFO util.NativeCodeLoader: Loaded the native-hadoop library
    13/06/26 16:22:08 WARN snappy.LoadSnappy: Snappy native library is available
    13/06/26 16:22:08 INFO snappy.LoadSnappy: Snappy native library loaded
    13/06/26 16:22:08 INFO mapred.FileInputFormat: Total input paths to process : 21
    13/06/26 16:22:08 INFO streaming.StreamJob: getLocalDirs(): [/hdfs/hd1/hadoop/mapred, /hdfs/hd2/hadoop/mapred, /hdfs/hd3/hadoop/mapred, /hdfs/hd4/hadoop/mapred, /hdfs/hd5/hadoop/mapred, /hdfs/hd6/hadoop/mapred, /hdfs/hd7/hadoop/mapred, /hdfs/hd8/hadoop/mapred, /hdfs/hd9/hadoop/mapred, /hdfs/hd10/hadoop/mapred, /hdfs/hd11/hadoop/mapred, /hdfs/hd12/hadoop/mapred]
    13/06/26 16:22:08 INFO streaming.StreamJob: Running job: job_201305151132_0044
    13/06/26 16:22:08 INFO streaming.StreamJob: To kill this job, run:
    13/06/26 16:22:08 INFO streaming.StreamJob: /usr/lib/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=idh-hssdn1:54311 -kill job_201305151132_0044
    13/06/26 16:22:08 INFO streaming.StreamJob: Tracking URL: http://idh-hssdn1:50030/jobdetails.j...305151132_0044
    13/06/26 16:22:09 INFO streaming.StreamJob: map 0% reduce 0%
    13/06/26 16:22:33 INFO streaming.StreamJob: map 4% reduce 0%
    13/06/26 16:22:34 INFO streaming.StreamJob: map 5% reduce 0%
    13/06/26 16:22:36 INFO streaming.StreamJob: map 9% reduce 0%
    13/06/26 16:22:37 INFO streaming.StreamJob: map 10% reduce 0%
    13/06/26 16:22:39 INFO streaming.StreamJob: map 14% reduce 0%
    13/06/26 16:22:40 INFO streaming.StreamJob: map 15% reduce 0%
    13/06/26 16:22:43 INFO streaming.StreamJob: map 20% reduce 0%
    13/06/26 16:22:46 INFO streaming.StreamJob: map 23% reduce 0%
    13/06/26 16:22:47 INFO streaming.StreamJob: map 25% reduce 0%
    13/06/26 16:22:49 INFO streaming.StreamJob: map 28% reduce 0%
    13/06/26 16:22:50 INFO streaming.StreamJob: map 30% reduce 0%
    13/06/26 16:22:52 INFO streaming.StreamJob: map 32% reduce 0%
    13/06/26 16:22:53 INFO streaming.StreamJob: map 33% reduce 0%
    13/06/26 16:22:58 INFO streaming.StreamJob: map 29% reduce 0%
    13/06/26 16:22:59 INFO streaming.StreamJob: map 19% reduce 0%
    13/06/26 16:23:00 INFO streaming.StreamJob: map 10% reduce 0%
    13/06/26 16:23:01 INFO streaming.StreamJob: map 5% reduce 0%
    13/06/26 16:23:02 INFO streaming.StreamJob: map 0% reduce 0%
    13/06/26 16:23:14 INFO streaming.StreamJob: map 100% reduce 100%
    13/06/26 16:23:14 INFO streaming.StreamJob: To kill this job, run:
    13/06/26 16:23:14 INFO streaming.StreamJob: /usr/lib/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=idh-hssdn1:54311 -kill job_201305151132_0044
    13/06/26 16:23:14 INFO streaming.StreamJob: Tracking URL: http://idh-hssdn1:50030/jobdetails.j...305151132_0044
    13/06/26 16:23:14 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201305151132_0044_m_000013
    13/06/26 16:23:14 INFO streaming.StreamJob: killJob...
    Streaming Command Failed!
    Non-zero exitlevel from Align streaming job
    elastic-mapreduce script completed with exitlevel 256
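
    The streaming console only says that a map task failed ("# of failed Map Tasks exceeded allowed limit"), not why. A hedged way to get at the underlying error is to read the failing attempt's own stderr on the node that ran it (the attempt-to-node mapping is on the JobTracker UI shown above, port 50030). The userlogs location below is the common Hadoop 1.x default and may sit elsewhere in the Intel distribution:

    # attempt id = the failed task id from the console plus an attempt suffix (_0, _1, ...)
    ATTEMPT=attempt_201305151132_0044_m_000013_0
    find "${HADOOP_LOG_DIR:-/var/log/hadoop}/userlogs" -type d -name "${ATTEMPT}*"
    cat "${HADOOP_LOG_DIR:-/var/log/hadoop}/userlogs/job_201305151132_0044/${ATTEMPT}/stderr"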


    This is what I see in the logs:

    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /crossbow/intermediate/70647/align/_temporary/_attempt_201305151132_0044_m_000006_2/part-00006 File does not exist. Holder DFSClient_attempt_201305151132_0044_m_000006_2 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1802)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1793)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1848)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1836)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:719)
    at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1414)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1410)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1408)
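
    One hedged observation: a LeaseExpiredException on a .../_temporary/_attempt_* path is usually a secondary symptom - the attempt was killed (because it failed, or because another attempt finished first) and its temporary output directory was cleaned up while it was still trying to close the file - so the root cause is normally in the killed task's own stderr rather than in this NameNode trace. One generic thing to rule out is speculative execution spawning duplicate attempts (standard Hadoop 1.x property; the config path below is an assumption for your distribution):

    # absence of the property means the default, which is "true"
    grep -A1 "mapred.map.tasks.speculative.execution" /etc/hadoop/conf/mapred-site.xml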


    Any ideas on how to resolve this problem? If needed, I can post the contents of the other logs.

    Thanks,

    Abhi

  • #2
    New error observed this morning while trying to re-run the example: "Broken pipe"!

    org.apache.hadoop.streaming.PipeMapRed: java.io.IOException: Broken pipe
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:282)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:568)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper_aroundBody2(MapTask.java:444)
    at org.apache.hadoop.mapred.MapTask$AjcClosure3.run(MapTask.java:1)
    at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
    at com.intel.bigdata.management.agent.HadoopTaskAspect.doPhaseCall(HadoopTaskAspect.java:166)
    at com.intel.bigdata.management.agent.HadoopTaskAspect.ajc$inlineAccessMethod$com_intel_bigdata_management_agent_HadoopTaskAspect$com_intel_bigdata_management_agent_HadoopTaskAspect$doPhaseCall(HadoopTaskAspect.java:1)
    at com.intel.bigdata.management.agent.HadoopTaskAspect.aroundMap(HadoopTaskAspect.java:38)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run_aroundBody0(MapTask.java:377)
    at org.apache.hadoop.mapred.MapTask$AjcClosure1.run(MapTask.java:1)
    at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
    at com.intel.bigdata.management.agent.HadoopTaskAspect.aroundTaskRun(HadoopTaskAspect.java:95)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:351)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:260)
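
    A "Broken pipe" from PipeMapRed generally means the streaming child process - here, the Crossbow align script driving Bowtie - exited before Hadoop finished writing records to its stdin, so the aligner side is the place to look. A hedged check to run on each worker node (the install path mirrors the one in the job output above; the binary locations are assumptions):

    ls /usr/local/bin/crossbow-1.2.1            # the Crossbow install the jobs reference
    which bowtie && bowtie --version            # version actually visible to the task environment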

  • #3
    Is this a Hadoop configuration error? Standard Hadoop tests like TeraSort and WordCount run fine. Any help is greatly appreciated. Thanks.

  • #4
    Hello,

    I was wondering if you were able to resolve this. I am running into the same problem.

  • #5
    My problem was solved when I used the required Bowtie version, 0.12.8. Please try that first.
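
    For anyone hitting the same wall, a minimal sketch of putting Bowtie 0.12.8 on every node - the download URL, archive name and install prefix are assumptions, so adjust them for your environment:

    # repeat on each worker node (or push the binaries out with your usual tooling)
    wget http://sourceforge.net/projects/bowtie-bio/files/bowtie/0.12.8/bowtie-0.12.8-linux-x86_64.zip/download -O bowtie-0.12.8-linux-x86_64.zip
    unzip bowtie-0.12.8-linux-x86_64.zip
    sudo cp bowtie-0.12.8/bowtie bowtie-0.12.8/bowtie-build bowtie-0.12.8/bowtie-inspect /usr/local/bin/
    bowtie --version                            # should now report 0.12.8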
