Importing and processing data in R line by line


  • #1

    I am analyzing large datasets in R. To analyze data, my current practice is to import the entire dataset into the R workspace using the read.table() function. Rather than importing the entire dataset, however, I was wondering if it is possible to import, analyze and export each line of data individually so that the analysis would take up less computer memory.

    Can this be done? And if so, how?

  • #2
    Hi - R is designed to read the whole data file in one go. Reading it line by line is possible, but it will probably be horribly slow. Instead of reading one line at a time, you could read chunks of several lines in a loop, for example:

    Code:
    myinput <- "bigfile.txt"  ## Path to your big input file
    totlines <- 10000000      ## Number of lines in the input file. Get it from wc -l
    skip <- 0
    chunkLines <- 10000       ## No. of lines to read in one go. Set to 1 to really read one line at a time.
    while (skip < totlines){
        df <- read.table(myinput, skip = skip, nrows = chunkLines, stringsAsFactors = FALSE)
        skip <- skip + chunkLines
        ## ...do something with df...
    }
    Essentially, you use the skip and nrows arguments to read chunks of lines. To speed up read.table(), set stringsAsFactors to FALSE.

    A better alternative might be to use packages designed for dealing with data larger than memory; ff (http://cran.r-project.org/web/packages/ff/index.html) is one of them.
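
    For instance, here is a rough sketch of how ff might be used for this. The file name and separator are placeholders, and you should check the ff documentation for the exact read.table.ffdf() arguments:

    Code:
    library(ff)

    ## Read the file into an ffdf object, which keeps the data on disk
    ## rather than loading everything into RAM.
    bigdf <- read.table.ffdf(file = "bigfile.txt", header = TRUE, sep = "\t")

    ## Work through the data as chunks of ordinary data.frames
    chunkLines <- 10000
    for (start in seq(1, nrow(bigdf), by = chunkLines)){
        end <- min(start + chunkLines - 1, nrow(bigdf))
        df <- bigdf[start:end, ]  ## this chunk comes back as a regular data.frame
        ## ...do something with df...
    }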

    Hope this helps!

    Dario

    • #3
      Thanks Dario, much appreciated.

      • #4
        Check out the readLines function in R.
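
        For example, a rough sketch of chunked reading with readLines() on an open connection ("bigfile.txt" is a placeholder, and the parsing step depends on your file format):

        Code:
        ## Open a connection so each readLines() call continues where the last one stopped
        con <- file("bigfile.txt", open = "r")
        chunkLines <- 10000  ## set to 1 to process truly one line at a time

        repeat {
            lines <- readLines(con, n = chunkLines)
            if (length(lines) == 0) break  ## end of file
            ## ...parse and analyse 'lines', e.g. with strsplit(lines, "\t")...
        }
        close(con)

        Because the connection stays open between calls, the whole file is never held in memory at once.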
