
Adding 50Mb+ Files to a browser js-ipfs crashes Chrome #952

Closed · ya7ya opened this issue Aug 18, 2017 · 17 comments

Labels: exp/wizard Extensive knowledge (implications, ramifications) required · kind/bug A bug in existing code (including security flaws) · P0 Critical: Tackled by core team ASAP

Comments

@ya7ya (Contributor) commented Aug 18, 2017

  • Version: 0.25.1
  • Platform: Ubuntu 16.04 (4.4.0-42-generic), Chrome 57.0.2987.133 (64-bit); and Windows 10 (64-bit), Chrome 60.0.3112.90 (Official Build, 64-bit)
  • Subsystem: files.add

Type: Bug

Severity: Critical

Description: When trying out the exchange-files-in-browser example, uploading a file larger than 50MB crashes the browser. RAM usage gradually climbs until the tab dies. A sketch of the offending call is shown after the steps below.

Steps to reproduce the error:

  • Go to the exchange-files-in-browser example directory.
  • Run npm install.
  • Run npm start.
  • Follow along with the README.md, but upload a large file, around 90MB.
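
For reference, here's a minimal sketch of the kind of call that triggers the crash, assuming the js-ipfs 0.25.x callback API (readFileContents is a hypothetical helper that reads the File into an ArrayBuffer via a FileReader):

    // Hypothetical reproduction sketch, not the example's exact code:
    // the whole ~90MB file is handed to files.add as a single Buffer,
    // so every intermediate copy is held in memory at once.
    readFileContents(file).then((buf) => {
      window.ipfs.files.add(Buffer.from(buf), (err, res) => {
        if (err) throw err
        console.log('added:', res[0].hash)
      })
    })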

Edit: Version fix.

@daviddias added the kind/bug, exp/expert, P1, and exp/wizard labels and removed the exp/expert label · Aug 19, 2017
@daviddias (Member) commented Sep 1, 2017

@ya7ya Thanks for the report!

I believe this issue is related to the use of WebRTC and not necessarily to large files.

@daviddias (Member)

@ya7ya wanna try disabling WebRTC and adding a large file?
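
(For context, a minimal sketch of what disabling WebRTC could look like in the browser example, assuming the js-ipfs 0.25.x constructor options; the config shape here is an assumption, but the idea is to leave the libp2p-webrtc-star signaling multiaddr out of Addresses.Swarm so the node never dials over WebRTC:)

    const IPFS = require('ipfs')

    const node = new IPFS({
      config: {
        Addresses: {
          // leave this list empty instead of including the
          // libp2p-webrtc-star signaling multiaddr
          Swarm: []
        }
      }
    })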

@ya7ya (Contributor, Author) commented Sep 1, 2017

@diasdavid hey, I'll try that now 👍

@daviddias removed the exp/wizard label · Sep 1, 2017
@ya7ya (Contributor, Author) commented Sep 1, 2017

Hey @diasdavid, I tried disabling WebRTC and it still crashed. However, I had an idea and it kind of worked.

Instead of adding the file buffer using files.add, I used createAddStream and chunked the file into a Readable stream.

// requires the 'stream-buffers' package
const streamBuffers = require('stream-buffers')

readFileContents(file).then((buf) => {
  // create a Readable stream that emits the buffer in small chunks
  const myReadableStreamBuffer = new streamBuffers.ReadableStreamBuffer({
    chunkSize: 2048 // in bytes
  })

  window.ipfs.files.createAddStream((err, stream) => {
    if (err) throw err

    // each added file is emitted once it has been ingested
    stream.on('data', (file) => {
      console.log('FILE : ', file)
    })

    // keep the buffer flowing as chunks are consumed
    myReadableStreamBuffer.on('data', () => {
      myReadableStreamBuffer.resume()
    })

    // hand the Readable stream to the add-stream as the file content
    stream.write({
      path: file.name,
      content: myReadableStreamBuffer
    })

    // feed the file buffer in, then signal that no more data is coming
    myReadableStreamBuffer.put(Buffer.from(buf))
    myReadableStreamBuffer.stop()

    myReadableStreamBuffer.on('end', () => {
      console.log('stream ended.')
      stream.end()
    })

    myReadableStreamBuffer.resume()
  })
})

The browser didn't crash, but Chrome got very resource hungry and ate up a lot of RAM. Instead of adding the content as one big buffer, I chunked it into a stream to throttle it. It took a lot longer, but it didn't crash.

@daviddias (Member)

That tells me that we need stress tests sooner than ever. Thank you @ya7ya!

@pyromaticx

Having the same issue. I watched resources go to nearly 12GB of swap and 6GB of RAM in about 20 minutes before killing the Chrome Helper process. Running js-ipfs in the browser.

How could an 80MB file possibly cause all of this without some sort of severe memory leak occurring?

@daviddias (Member)

I'm curious to see if libp2p/js-libp2p-secio#92 and browserify/browserify-aes#48 will actually mitigate this problem.

@victorb (Member) commented Sep 4, 2017

If you're interested in seeing whether the two issues @diasdavid linked above help with this one (#952), and you're using webpack, you could try adding "browserify-aes": "github:crypto-browserify/browserify-aes#master" to your package.json to use the latest version of browserify-aes, which potentially fixes this issue.
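
(For illustration, a minimal package.json fragment with that override in place; the rest of the dependencies are whatever your project already uses:)

    {
      "dependencies": {
        "browserify-aes": "github:crypto-browserify/browserify-aes#master"
      }
    }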

@dignifiedquire (Member)

Did some tests with that, and it worked nicely with a 70MB file for me. Doing some more perf analysis, it seems we still have a few places where we do a lot of Buffer copies, which get expensive; we should look into optimising those away:

[screenshot: performance profile, 2017-09-04 16:48]

The first sendData is an internal method in streams, probably called from browserify-aes (streaming API).

@dignifiedquire (Member) commented Sep 4, 2017

Turns out all the stream issues from above were coming from the example and its use of the Node stream API. Switched things to pull-streams, and now we are fast, with no crashing even with 300MB: #988

@ya7ya (Contributor, Author) commented Sep 4, 2017

@dignifiedquire Thanks 👍, this is a much more elegant solution. That's what I had in mind in the first place, but my pull-streams game is still weak 😄.

@ya7ya (Contributor, Author) commented Sep 5, 2017

@dignifiedquire I have a noob question regarding this file uploader: how do you measure progress in a pull-stream, i.e. how much of the data has been pulled into files.createAddPullStream so far? That would be useful for creating a progress bar for large files.

// requires the 'pull-stream' and 'pull-filereader' packages
const pull = require('pull-stream')
const pullFilereader = require('pull-filereader')

pull(
  pull.values(files),
  pull.through((file) => console.log('Adding %s', file)),
  // add the files one at a time, each through its own pull pipeline
  pull.asyncMap((file, cb) => pull(
    pull.values([{
      path: file.name,
      content: pullFilereader(file)
    }]),
    node.files.createAddPullStream(),
    pull.collect((err, res) => {
      if (err) {
        return cb(err)
      }
      const file = res[0]
      console.log('Adding %s finished', file.path)
      cb(null, file)
    }))),
  pull.collect((err, files) => {
    if (err) {
      throw err
    }
    if (files && files.length) {
      console.log('All Done')
    }
  })
)

@dignifiedquire (Member) commented Sep 5, 2017

// wrap the file source so every chunk reports its length on the way through
content: pull(
  pullFilereader(file),
  pull.through((chunk) => updateProgress(chunk.length))
)
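
(For completeness, a minimal sketch of the updateProgress helper assumed above; the names are illustrative, and file.size comes from the browser's File object:)

    let bytesRead = 0

    function updateProgress (chunkLength) {
      bytesRead += chunkLength
      // file.size is the total number of bytes in the selected File
      console.log('progress: %d%%', Math.round((bytesRead / file.size) * 100))
    }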

@ya7ya (Contributor, Author) commented Sep 5, 2017

@dignifiedquire Thanks a lot, dude 👍, it works. I appreciate the help.

@Beanow commented Sep 11, 2017

Perhaps dignifiedquire/pull-block#2 would solve the crashes with the original files.add approach you had?

@daviddias added the exp/wizard and status/ready labels · Sep 12, 2017
@daviddias added the P0 label and removed the P1 label · Oct 11, 2017
@daviddias added the status/in-progress label and removed the status/ready label · Nov 6, 2017
@daviddias self-assigned this · Nov 6, 2017
@Beanow commented Nov 16, 2017

I ran the old example, and indeed it should be much better with pull-block 1.4.0 landing there.

@daviddias (Member)

Lots of improvements added indeed! Let's redirect our attention to PR #1086 (comment); I successfully managed to upload more than 750MB and fetch it through the gateway.
